What is PETSc and why?
PETSc is a library with a great many uses, but for now let's say that it's primarily a library for dealing with the sort of linear algebra that comes from discretized PDEs. On a single processor, the basics of such computations can be coded out by a grad student during a semester course in numerical analysis, but at large scale the issues get much more complicated and a library becomes indispensable.
PETSc's prime justification is then that it helps you realize scientific computations at large scales, meaning large problem sizes on large numbers of processors.
There are two points to emphasize here:
Remark The PETSc library has hundreds of routines. In this chapter and the next few we will only touch on a basic subset of these. The full list of man pages can be found at https://petsc.org/release/docs/manualpages/singleindex.html. Each man page comes with links to related routines, as well as (usually) example codes for that routine.
End of remark
What is in PETSc?
The routines in PETSc (of which there are hundreds) can roughly be divided into these classes:
Programming model
PETSc, being based on MPI, uses the SPMD programming model (section 2.1 ), where all processes execute the same executable. Even more than in regular MPI codes, this makes sense here, since most PETSc objects are collectively created on some communicator, often MPI_COMM_WORLD . With the object-oriented design (section 31.1.3 ) this means that a PETSc program almost looks like a sequential program.
MatMult(A,x,y);     // y <- Ax
VecCopy(y,res);     // r <- y
VecAXPY(res,-1.,b); // r <- r - b
This is sometimes called sequential semantics.
Design philosophy
PETSc has an object-oriented design, even though it is written in C. There are classes of objects, such as Mat for matrices and Vec for vectors, but there is also the KSP (for "Krylov SPace solver") class of linear system solvers, and PetscViewer for outputting matrices and vectors to screen or file.
Part of the object-oriented design is the polymorphism of objects: after you have created a Mat matrix as sparse or dense, all methods such as MatMult (for the matrix-vector product) take the same arguments: the matrix, and an input and output vector.
This design, where the programmer manipulates a 'handle', also means that the internals of the object, such as the actual storage of the elements, are hidden from the programmer. This hiding goes so far that even filling in elements is not done directly but through function calls:
VecSetValue(x,i,value,mode);
MatSetValue(A,i,j,value,mode);
MatSetValues(A,ni,is,nj,js,values,mode);
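Values set through these calls only become usable after an assembly phase. The following sketch illustrates the pattern for a vector; it assumes a PETSc installation to compile against, and the vector size 10 and the values stored are arbitrary choices for illustration:

```c
#include <petsc.h>

int main(int argc,char **argv) {
  Vec x;
  PetscCall( PetscInitialize(&argc,&argv,NULL,NULL) );

  PetscCall( VecCreate(PETSC_COMM_WORLD,&x) );
  PetscCall( VecSetSizes(x,PETSC_DECIDE,10) );
  PetscCall( VecSetFromOptions(x) );

  /* elements are set through function calls, not direct access */
  for (PetscInt i=0; i<10; i++)
    PetscCall( VecSetValue(x,i,(PetscScalar)i,INSERT_VALUES) );

  /* the assembly phase makes the values usable */
  PetscCall( VecAssemblyBegin(x) );
  PetscCall( VecAssemblyEnd(x) );

  PetscCall( VecDestroy(&x) );
  return PetscFinalize();
}
```

Note that a process may set values it does not own; the assembly phase moves such values to the owning process.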
Language support
C/C++
PETSc is implemented in C, so there is a natural interface to C. There is no separate C++ interface.
Fortran
A Fortran90 interface exists. The Fortran77 interface is only of interest for historical reasons.
To use Fortran, include both a module and a cpp header file:
#include "petsc/finclude/petscXXX.h" use petscXXX(here XXX stands for one of the PETSc types, but including \flstinline{petsc.h} and using \flstinline{use petsc} gives inclusion of the whole library.)
Variables can be declared with their type (Vec, Mat, KSP et cetera), but internally they are Fortran Type objects, so they can also be declared as such.
Example:
#include "petsc/finclude/petscvec.h" use petscvec Vec b type(tVec) x
The output arguments of many query routines are optional in PETSc. While in C a generic NULL can be passed, Fortran has type-specific nulls, such as PETSC_NULL_INTEGER and PETSC_NULL_OBJECT.
Python
A Python interface was written by Lisandro Dalcin. It can be added to PETSc at installation time; see section 31.3.
This book discusses the Python interface in short remarks in the appropriate sections.
Documentation
PETSc comes with a manual in pdf form and web pages with the documentation for every routine. The starting point is the web page
https://petsc.org/release/documentation/ .
There is also a mailing list with excellent support for questions and bug reports.
TACC note For questions specific to using PETSc on TACC resources, submit tickets to the TACC or XSEDE portal.
Basics of running a PETSc program
Compilation
A PETSc compilation needs a number of include and library paths, probably too many to specify interactively. The easiest solution is to create a makefile and load the standard variables and compilation rules. (You can use $PETSC_DIR/share/petsc/Makefile.user for inspiration.)
Throughout, we will assume that variables PETSC_DIR and PETSC_ARCH have been set. These depend on your local installation; see section 31.3 .
In the easiest setup, you leave the compilation to PETSc and your make rules only do the link step, using CLINKER or FLINKER for C/Fortran respectively:
include ${PETSC_DIR}/lib/petsc/conf/variables
include ${PETSC_DIR}/lib/petsc/conf/rules

program : program.o
	${CLINKER} -o $@ $^ ${PETSC_LIB}
The two include lines provide the compilation rule and the library variable.
You can use these rules:
$(LINK.F) -o $@ $^ $(LDLIBS)
$(COMPILE.F) $(OUTPUT_OPTION) $<
$(LINK.cc) -o $@ $^ $(LDLIBS)
$(COMPILE.cc) $(OUTPUT_OPTION) $<
## example link rule:
# app : a.o b.o c.o
#	$(LINK.F) -o $@ $^ $(LDLIBS)
(The PETSC_CC_INCLUDES variable contains all paths for compilation of C programs; correspondingly there is PETSC_FC_INCLUDES for Fortran source.)
If you don't want to include those configuration files, you can find out the include options by:
cd $PETSC_DIR
make getincludedirs
make getlinklibs
and copying the results into your compilation script.
There is an example makefile $PETSC_DIR/share/petsc/Makefile.user you can take for inspiration. Invoked without arguments it prints out the relevant variables:
[c:246] make -f $PETSC_DIR/share/petsc/Makefile.user
CC=/Users/eijkhout/Installation/petsc/petsc-3.13/macx-clang-debug/bin/mpicc
CXX=/Users/eijkhout/Installation/petsc/petsc-3.13/macx-clang-debug/bin/mpicxx
FC=/Users/eijkhout/Installation/petsc/petsc-3.13/macx-clang-debug/bin/mpif90
CFLAGS=-Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -Qunused-arguments -fvisibility=hidden -g3
CXXFLAGS=-Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g
FFLAGS=-m64 -g
CPPFLAGS=-I/Users/eijkhout/Installation/petsc/petsc-3.13/macx-clang-debug/include -I/Users/eijkhout/Installation/petsc/petsc-3.13/include
LDFLAGS=-L/Users/eijkhout/Installation/petsc/petsc-3.13/macx-clang-debug/lib -Wl,-rpath,/Users/eijkhout/Installation/petsc/petsc-3.13/macx-clang-debug/lib
LDLIBS=-lpetsc -lm
TACC note On TACC clusters, a petsc installation is loaded by commands such as
module load petsc/3.16
Use module avail petsc to see what configurations exist. The basic versions are
# development
module load petsc/3.11-debug
# production
module load petsc/3.11
Other installations are real versus complex scalars, or 64-bit integers instead of the default 32-bit. The command
module spider petsc
tells you all the available petsc versions. The listed modules have a naming convention such as petsc/3.11-i64debug, where the 3.11 is the PETSc release (minor patches are not included in this version; TACC aims to install only the latest patch, but generally several versions are available), and i64debug describes the debug version of the installation with 64-bit integers.
Running
PETSc programs use MPI for parallelism, so they are started like any other MPI program:
mpiexec -n 5 -machinefile mf \
    your_petsc_program option1 option2 option3
TACC note On TACC clusters, use ibrun .
Initialization and finalization
PETSc has a call that initializes both PETSc and MPI, so normally you would replace MPI_Init by PetscInitialize. Unlike with MPI, you do not want to use a NULL value for the argc,argv arguments, since PETSc makes extensive use of commandline options; see section 38.3.
// init.c
PetscCall( PetscInitialize(&argc,&argv,(char*)0,help) );
int flag;
MPI_Initialized(&flag);
if (flag)
  printf("MPI was initialized by PETSc\n");
else
  printf("MPI not yet initialized\n");
There are two further arguments to PetscInitialize :
Fortran note The Fortran version has no arguments for commandline options; however, you can pass a file of database options:
call PetscInitialize(filename,ierr)
If no file is specified, give PETSC_NULL_CHARACTER as argument.
For passing help information there is a variant that takes a help string.
If your main program is in C, but some of your PETSc calls are in Fortran files, it is necessary to call PetscInitializeFortran after PetscInitialize .
!! init.F90
call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
CHKERRA(ierr)
call MPI_Initialized(flag,ierr)
CHKERRA(ierr)
if (flag) then
   print *,"MPI was initialized by PETSc"
end if
Python note The following works if you don't need commandline options.
from petsc4py import PETSc
To pass commandline arguments to PETSc, do:
import sys
from petsc4py import init
init(sys.argv)
from petsc4py import PETSc
After initialization, you can use MPI_COMM_WORLD or PETSC_COMM_WORLD (which is created by MPI_Comm_dup and used internally by PETSc):
MPI_Comm comm = PETSC_COMM_WORLD;
MPI_Comm_rank(comm,&mytid);
MPI_Comm_size(comm,&ntids);
Python note
comm = PETSc.COMM_WORLD
nprocs = comm.getSize()
procno = comm.getRank()
The corresponding call to replace MPI_Finalize is PetscFinalize. You can elegantly capture and return the error code with the idiom
return PetscFinalize();
at the end of your main program.
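Putting the initialization, communicator queries, and finalization together gives the skeleton of a complete PETSc main program. This is a minimal sketch, assuming a PETSc installation; the help string and the printed message are arbitrary:

```c
#include <petsc.h>

static char help[] = "Minimal PETSc program.\n";

int main(int argc,char **argv) {
  PetscMPIInt procno,nprocs;

  /* initializes PETSc and, if needed, MPI */
  PetscCall( PetscInitialize(&argc,&argv,NULL,help) );

  /* PETSC_COMM_WORLD is a duplicate of MPI_COMM_WORLD */
  PetscCall( MPI_Comm_rank(PETSC_COMM_WORLD,&procno) );
  PetscCall( MPI_Comm_size(PETSC_COMM_WORLD,&nprocs) );
  PetscCall( PetscPrintf(PETSC_COMM_WORLD,
                         "Running on %d processes\n",nprocs) );

  /* capture and return the error code */
  return PetscFinalize();
}
```

PetscPrintf, unlike plain printf, prints only on the first process of the communicator, so the message appears once regardless of the process count.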
PETSc installation
PETSc has a large number of installation options. These can roughly be divided into:
For an existing installation, you can find the options used, and other aspects of the build history, in the configure.log / make.log files:
$PETSC_DIR/$PETSC_ARCH/lib/petsc/conf/configure.log
$PETSC_DIR/$PETSC_ARCH/lib/petsc/conf/make.log
Versions
PETSc is up to version 3.18.x as of this writing. Older versions may lack certain routines, or display certain bugs. However, older versions may also contain routines and keywords that have subsequently been removed. PETSc versions are not backward compatible!
The version is stored in macros PETSC_VERSION , PETSC_VERSION_MAJOR , PETSC_VERSION_MINOR , PETSC_VERSION_SUBMINOR .
For testing, the following macros are defined: PETSC_VERSION_EQ/LT/LE/GT/GE. Example:
// cudainit316.c
#include <petsc.h>
#if PETSC_VERSION_LT(3,17,0)
#else
#error This program uses APIs abandoned in 3.17
#endif
Debug
For any set of options, you will typically make two installations: one with --with-debugging=yes and one with --with-debugging=no. See section 38.1.1 for more detail on the differences between debug and non-debug mode.
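The two installations can coexist under one PETSC_DIR by giving them different PETSC_ARCH names. A sketch, with hypothetical architecture names; your site's conventions may differ:

```shell
# debug build, for development
./configure PETSC_ARCH=arch-debug --with-debugging=yes
make PETSC_ARCH=arch-debug all

# optimized build, for production runs
./configure PETSC_ARCH=arch-opt --with-debugging=no
make PETSC_ARCH=arch-opt all
```

Switching between the two is then a matter of setting the PETSC_ARCH environment variable before compiling your program.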
Environment options
Compilers, compiler options, MPI.
While it is possible to specify --download-mpich, this should only be done on machines that you are certain do not already have an MPI library, such as your personal laptop. Supercomputer clusters are likely to have an optimized MPI library, and letting PETSc download its own will lead to degraded performance.
Variants
External packages
PETSc can extend its functionality through external packages such as Mumps, Hypre, and fftw. These can be specified in two ways:
--with-hdf5-include=${TACC_HDF5_INC} --with-hdf5-lib=${TACC_HDF5_LIB}
--with-parmetis=1 --download-parmetis=1
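The two styles can be combined in one configure invocation: point at a system installation for packages your cluster provides, and let PETSc download the rest. A sketch, with the HDF5 paths taken from the TACC environment variables above:

```shell
./configure \
    --with-debugging=no \
    --with-hdf5-include=${TACC_HDF5_INC} --with-hdf5-lib=${TACC_HDF5_LIB} \
    --with-parmetis=1 --download-parmetis=1
```

After configuration, the configure.log file records which external packages were found or downloaded.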
Python note The Python interface (section 31.1.4.3 ) can be installed with the option
--download-petsc4py=<no,yes,filename,url>
This is easiest if your python already includes mpi4py; see section 1.5.4.
Remark There are two packages that PETSc is capable of downloading and installing, but that you may want to avoid:
Slepc
Most external packages add functionality to the lower layers of PETSc. For instance, the Hypre package adds some preconditioners to PETSc's repertoire (section 35.1.7.3), while Mumps (section 35.2) makes it possible to use the LU preconditioner in parallel.
On the other hand, there are packages that use PETSc as a lower level tool. In particular, the eigenvalue solver package Slepc [slepc-homepage] can be installed through the options
--download-slepc=<no,yes,filename,url>
    Download and install slepc.  current: no
--download-slepc-commit=commitid
    The commit id from a git repository to use for the build of slepc.  current: 0
--download-slepc-configure-arguments=string
    Additional configure arguments for the build of SLEPc
The slepc header files wind up in the same directory as the petsc headers, so no change to your compilation rules is needed. However, you need to add -lslepc to the link line.
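Concretely, the link rule from the compilation section above only needs the extra library. A sketch of the adapted makefile, with a hypothetical program name:

```make
include ${PETSC_DIR}/lib/petsc/conf/variables
include ${PETSC_DIR}/lib/petsc/conf/rules

# slepc headers live next to the petsc headers, so no extra
# include paths; only the link line changes
eigprogram : eigprogram.o
	${CLINKER} -o $@ $^ -lslepc ${PETSC_LIB}
```

The -lslepc must come before ${PETSC_LIB} so that the linker can resolve Slepc's references to PETSc symbols.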