PETSc basics

31.1 : What is PETSc and why?
31.1.1 : What is in PETSc?
31.1.2 : Programming model
31.1.3 : Design philosophy
31.1.4 : Language support
31.1.4.1 : C/C++
31.1.4.2 : Fortran
31.1.4.3 : Python
31.1.5 : Documentation
31.2 : Basics of running a PETSc program
31.2.1 : Compilation
31.2.2 : Running
31.2.3 : Initialization and finalization
31.3 : PETSc installation
31.3.1 : Versions
31.3.2 : Debug
31.3.3 : Environment options
31.3.4 : Variants
31.4 : External packages
31.4.1 : Slepc

31 PETSc basics

31.1 What is PETSc and why?


PETSc is a library with a great many uses, but for now let's say that it's primarily a library for dealing with the sort of linear algebra that comes from discretized PDEs. On a single processor, the basics of such computations can be coded out by a grad student during a semester course in numerical analysis, but at large scale the issues get much more complicated and a library becomes indispensable.

PETSc's prime justification is then that it helps you realize scientific computations at large scales, meaning large problem sizes on large numbers of processors.


Remark The PETSc library has hundreds of routines. In this chapter and the next few we will only touch on a basic subset of these. The full list of man pages can be found at

https://petsc.org/release/docs/manualpages/singleindex.html. Each man page comes with links to related routines, as well as (usually) example codes for that routine.
End of remark

31.1.1 What is in PETSc?


The routines in PETSc (of which there are hundreds) can roughly be divided into classes for linear algebra objects such as vectors and matrices, for solvers, and for supporting functionality such as input/output; section 31.1.3 names the main ones.

31.1.2 Programming model


PETSc, being based on MPI, uses the SPMD programming model (section 2.1), where all processes execute the same executable. Even more than in regular MPI codes, this makes sense here, since most PETSc objects are collectively created on some communicator, often MPI_COMM_WORLD. With the object-oriented design (section 31.1.3) this means that a PETSc program almost looks like a sequential program.

MatMult(A,x,y);      // y <- Ax
VecCopy(y,res);      // r <- y
VecAXPY(res,-1.,b);  // r <- r - b
This is sometimes called sequential semantics.
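For instance, computing a residual norm reads the same regardless of the number of processes. A minimal sketch, assuming that A, x, b, res have already been created and filled:

MatMult(A,x,res);             // res <- Ax
VecAXPY(res,-1.,b);           // res <- res - b
PetscReal norm;
VecNorm(res,NORM_2,&norm);    // collective: all processes get the same value
PetscPrintf(PETSC_COMM_WORLD,"Residual norm: %g\n",(double)norm);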

31.1.3 Design philosophy


PETSc has an object-oriented design, even though it is written in C. There are classes of objects, such as Mat for matrices and Vec for vectors, but there is also the KSP (for "Krylov SPace solver") class of linear system solvers, and PetscViewer for outputting matrices and vectors to screen or file.

Part of the object-oriented design is the polymorphism of objects: after you have created a Mat matrix as sparse or dense, all methods such as MatMult (for the matrix-vector product) take the same arguments: the matrix, and an input and output vector.

This design, where the programmer manipulates a 'handle', also means that the internals of the object, the actual storage of the elements, are hidden from the programmer. This hiding goes so far that even filling in elements is not done directly but through function calls:

VecSetValue(x,i,v,mode)
MatSetValue(A,i,j,v,mode)
MatSetValues(A,ni,is,nj,js,v,mode)
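For illustration, here is a sketch (not one of the book's examples) that creates a matrix, fills its diagonal through such calls, and applies it. Since the type is set from the options database, the same code runs with sparse or dense storage:

Mat A; Vec x,y;
PetscInt i,lo,hi,n=10;
MatCreate(PETSC_COMM_WORLD,&A);
MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n);
MatSetFromOptions(A);                     // e.g. -mat_type aij or -mat_type dense
MatSetUp(A);
MatGetOwnershipRange(A,&lo,&hi);          // each process fills its own rows
for (i=lo; i<hi; i++)
  MatSetValue(A,i,i,2.0,INSERT_VALUES);
MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);   // move any off-process values in place
MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);
MatCreateVecs(A,&x,&y);                   // vectors with a layout matching A
VecSet(x,1.0);
MatMult(A,x,y);                           // identical call for any storage type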

31.1.4 Language support


31.1.4.1 C/C++


PETSc is implemented in C, so there is a natural interface to C. There is no separate C++ interface.

31.1.4.2 Fortran


A Fortran90 interface exists. The Fortran77 interface is only of interest for historical reasons.

To use Fortran, include both a module and a cpp header file:

#include "petsc/finclude/petscXXX.h"
use petscXXX
(Here XXX stands for one of the PETSc types; the combination of petsc.h and use petsc includes the whole library.)

Variables can be declared with their type (Vec, Mat, KSP, et cetera), but internally they are Fortran derived types, so they can also be declared as such, for instance type(tVec) for a vector.

Example:

#include "petsc/finclude/petscvec.h"
use petscvec
Vec b
type(tVec) x

The output arguments of many query routines are optional in PETSc. While in C a generic NULL can be passed, Fortran has type-specific nulls, such as PETSC_NULL_INTEGER and PETSC_NULL_OBJECT.

31.1.4.3 Python


A Python interface was written by Lisandro Dalcin. It can be added to PETSc at installation time; see section 31.3.

This book discusses the Python interface in short remarks in the appropriate sections.

31.1.5 Documentation


PETSc comes with a manual in pdf form and web pages with the documentation for every routine. The starting point is the web page

https://petsc.org/release/documentation/.

There is also a mailing list with excellent support for questions and bug reports.

TACC note For questions specific to using PETSc on TACC resources, submit tickets to the TACC or XSEDE portal.

31.2 Basics of running a PETSc program


31.2.1 Compilation


A PETSc compilation needs a number of include and library paths, probably too many to specify interactively. The easiest solution is to create a makefile and load the standard variables and compilation rules. (You can use $PETSC_DIR/share/petsc/Makefile.user for inspiration.)

Throughout, we will assume that variables PETSC_DIR and PETSC_ARCH have been set. These depend on your local installation; see section 31.3.

In the easiest setup, you leave the compilation to PETSc and your make rules only do the link step, using CLINKER or FLINKER for C/Fortran respectively:

include ${PETSC_DIR}/lib/petsc/conf/variables
include ${PETSC_DIR}/lib/petsc/conf/rules
program : program.o
        ${CLINKER} -o $@ $^ ${PETSC_LIB}
The two include lines provide the compilation rule and the library variable.

You can use pattern rules like the following (as in the example makefile discussed below):

% : %.F90
        $(LINK.F) -o $@ $^ $(LDLIBS)
%.o : %.F90
        $(COMPILE.F) $(OUTPUT_OPTION) $<
% : %.cxx
        $(LINK.cc) -o $@ $^ $(LDLIBS)
%.o : %.cxx
        $(COMPILE.cc) $(OUTPUT_OPTION) $<

## example link rule:
# app : a.o b.o c.o
#       $(LINK.F) -o $@ $^ $(LDLIBS)

(The PETSC_CC_INCLUDES variable contains all paths for compilation of C programs; correspondingly there is PETSC_FC_INCLUDES for Fortran source.)

If you don't want to include those configuration files, you can find out the include and link options by running:

cd $PETSC_DIR
make getincludedirs
make getlinklibs
and copying the results into your compilation script.

There is an example makefile $PETSC_DIR/share/petsc/Makefile.user that you can take for inspiration. Invoked without arguments, it prints out the relevant variables:

make -f $PETSC_DIR/share/petsc/Makefile.user
CC=/Users/eijkhout/Installation/petsc/petsc-3.13/macx-clang-debug/bin/mpicc
CXX=/Users/eijkhout/Installation/petsc/petsc-3.13/macx-clang-debug/bin/mpicxx
FC=/Users/eijkhout/Installation/petsc/petsc-3.13/macx-clang-debug/bin/mpif90
CFLAGS=-Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -Qunused-arguments -fvisibility=hidden -g3
CXXFLAGS=-Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g
FFLAGS=-m64 -g
CPPFLAGS=-I/Users/eijkhout/Installation/petsc/petsc-3.13/macx-clang-debug/include -I/Users/eijkhout/Installation/petsc/petsc-3.13/include
LDFLAGS=-L/Users/eijkhout/Installation/petsc/petsc-3.13/macx-clang-debug/lib -Wl,-rpath,/Users/eijkhout/Installation/petsc/petsc-3.13/macx-clang-debug/lib
LDLIBS=-lpetsc -lm

TACC note On TACC clusters, a petsc installation is loaded by commands such as

module load petsc/3.16
Use module avail petsc to see what configurations exist. The basic versions are
# development
module load petsc/3.11-debug
# production
module load petsc/3.11
Other installations use complex instead of real scalars, or 64-bit instead of the default 32-bit integers. The command
module spider petsc
tells you all the available petsc versions. The listed modules have a naming convention such as petsc/3.11-i64debug, where 3.11 is the PETSc release (minor patch numbers are not included; TACC aims to install only the latest patch, but generally several versions are available) and i64debug describes a debug installation with 64-bit integers.

31.2.2 Running


PETSc programs use MPI for parallelism, so they are started like any other MPI program:

mpiexec -n 5 -machinefile mf \
    your_petsc_program option1 option2 option3

TACC note On TACC clusters, use ibrun.

31.2.3 Initialization and finalization


PETSc has a call that initializes both PETSc and MPI, so normally you would replace MPI_Init by PetscInitialize. Unlike with MPI, you do not want to use a NULL value for the argc,argv arguments, since PETSc makes extensive use of commandline options; see section 38.3.

// init.c
PetscCall( PetscInitialize
  (&argc,&argv,(char*)0,help) );
int flag;
MPI_Initialized(&flag);
if (flag)
  printf("MPI was initialized by PETSc\n");
else
  printf("MPI not yet initialized\n");

There are two further arguments to PetscInitialize, used in the sketch below:

  1. the name of an options database file; and
  2. a help string, that is displayed if you run your program with the -h option.
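As an illustration, here is a minimal sketch; the options file name myoptions.rc is made up for this example, and can be replaced by NULL if no such file is used.

#include <petsc.h>

static char help[] = "Demonstration of PetscInitialize arguments.\n";

int main(int argc,char **argv) {
  // third argument: an options database file, read in addition to the commandline;
  // fourth argument: the help string, displayed when you run with -h
  PetscCall( PetscInitialize(&argc,&argv,"myoptions.rc",help) );
  /* ... computational code ... */
  return PetscFinalize();
}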

Fortran note The Fortran version has no arguments for commandline options; however, you can pass a file of database options:

PetscInitialize(filename,ierr)
If none is specified, give PETSC_NULL_CHARACTER as argument.

For passing help information there is a variant that takes a help string.

If your main program is in C, but some of your PETSc calls are in Fortran files, it is necessary to call PetscInitializeFortran after PetscInitialize  .

!! init.F90
  call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
  CHKERRA(ierr)
  call MPI_Initialized(flag,ierr)
  CHKERRA(ierr)
  if (flag) then
     print *,"MPI was initialized by PETSc"
  else
     print *,"MPI not yet initialized"
  end if
End of Fortran note

Python note The following works if you don't need commandline options.

from petsc4py import PETSc
To pass commandline arguments to PETSc, do:
import sys
from petsc4py import init
init(sys.argv)
from petsc4py import PETSc

After initialization, you can use MPI_COMM_WORLD or PETSC_COMM_WORLD (which is created by MPI_Comm_dup and used internally by PETSc):

MPI_Comm comm = PETSC_COMM_WORLD;
MPI_Comm_rank(comm,&mytid);
MPI_Comm_size(comm,&ntids);

Python note

comm = PETSc.COMM_WORLD
nprocs = comm.getSize()
procno = comm.getRank()

The corresponding call to replace MPI_Finalize is PetscFinalize. You can elegantly capture and return the error code by the idiom

return PetscFinalize();
at the end of your main program.
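Putting initialization and finalization together, a minimal but complete main program could look like this sketch:

#include <petsc.h>

int main(int argc,char **argv) {
  PetscMPIInt procno,nprocs;
  PetscCall( PetscInitialize(&argc,&argv,NULL,NULL) );
  MPI_Comm comm = PETSC_COMM_WORLD;
  MPI_Comm_rank(comm,&procno);
  MPI_Comm_size(comm,&nprocs);
  // PetscPrintf prints only from the first process of the communicator
  PetscCall( PetscPrintf(comm,"Running on %d processes\n",nprocs) );
  return PetscFinalize();   // zero on success, otherwise the PETSc error code
}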

31.3 PETSc installation


PETSc has a large number of installation options. These can roughly be divided into:

  1. Options to describe the environment in which PETSc is being installed, such as the names of the compilers or the location of the MPI library;
  2. Options to specify the type of PETSc installation: real versus complex, 32 versus 64-bit integers, et cetera;
  3. Options to specify additional packages to download.

For an existing installation, you can find the options used, and other aspects of the build history, in the configure.log and make.log files:

$PETSC_DIR/$PETSC_ARCH/lib/petsc/conf/configure.log
$PETSC_DIR/$PETSC_ARCH/lib/petsc/conf/make.log

31.3.1 Versions


PETSc is up to version 3.18.x as of this writing. Older versions may miss certain routines, or display certain bugs. However, older versions may also contain routines and keywords that have subsequently been removed. PETSc versions are not backwards compatible!

The version is stored in the macros PETSC_VERSION, PETSC_VERSION_MAJOR, PETSC_VERSION_MINOR, PETSC_VERSION_SUBMINOR.

For testing, the macros PETSC_VERSION_EQ/LT/LE/GT/GE are defined, each taking a major,minor,subminor triplet. Example:

// cudainit316.c
#include <petsc.h>
#if PETSC_VERSION_GE(3,17,0)
#error This program uses APIs abandoned in 3.17
#endif
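The version macros are plain integer constants, so they can also be used for reporting at runtime; a minimal sketch:

#include <petsc.h>

int main(int argc,char **argv) {
  PetscCall( PetscInitialize(&argc,&argv,NULL,NULL) );
  // print the version this executable was compiled against
  PetscCall( PetscPrintf(PETSC_COMM_WORLD,"PETSc version %d.%d.%d\n",
             PETSC_VERSION_MAJOR,PETSC_VERSION_MINOR,PETSC_VERSION_SUBMINOR) );
  return PetscFinalize();
}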

31.3.2 Debug


For any set of options, you will typically make two installations: one with --with-debugging=yes and one with --with-debugging=no. See section 38.1.1 for more detail on the differences between debug and non-debug mode.

31.3.3 Environment options


These options describe the environment that PETSc is being installed in: the compilers, compiler options, and the MPI library.

While it is possible to specify --download-mpich, this should only be done on machines that you are certain do not already have an MPI library, such as your personal laptop. Supercomputer clusters are likely to have an optimized MPI library, and letting PETSc download its own will lead to degraded performance.

31.3.4 Variants


31.4 External packages


PETSc can extend its functionality through external packages such as Mumps, Hypre, and FFTW. These can be specified in two ways:

  1. Referring to an installation already on your system:
    --with-hdf5-include=${TACC_HDF5_INC}
    --with-hdf5-lib=${TACC_HDF5_LIB}
    
  2. By letting petsc download and install them itself:
    --with-parmetis=1 --download-parmetis=1
    

Python note The Python interface (section 31.1.4.3) can be installed with the option

--download-petsc4py=<no,yes,filename,url>
This is easiest if your python already includes mpi4py; see section 1.5.4.

Remark There are two packages that PETSc is capable of downloading and installing, but that you may want to avoid: an MPI implementation (see the warning about --download-mpich in section 31.3.3) and the BLAS/LAPACK libraries. Supercomputer installations normally provide optimized versions of these.
End of remark

31.4.1 Slepc


Most external packages add functionality to the lower layers of PETSc. For instance, the Hypre package adds some preconditioners to PETSc's repertoire (section 35.1.7.3), while Mumps (section 35.2) makes it possible to use the LU preconditioner in parallel.

On the other hand, there are packages that use PETSc as a lower-level tool. In particular, the eigenvalue solver package Slepc [slepc-homepage] can be installed through the options

--download-slepc=<no,yes,filename,url>
       Download and install slepc  current: no
--download-slepc-commit=commitid
       The commit id from a git repository to use for the build of slepc  current: 0
--download-slepc-configure-arguments=string
       Additional configure arguments for the build of SLEPc
The slepc header files wind up in the same directory as the petsc headers, so no change to your compilation rules is needed. However, you need to add -lslepc to the link line.
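To give a flavor, a minimal eigenvalue computation with Slepc's EPS solver object could look like the following sketch. It assumes a matrix A has already been assembled; a complete program would also call SlepcInitialize and SlepcFinalize, which subsume their PETSc counterparts.

#include <slepceps.h>
// assume Mat A has been created and assembled
EPS eps;                                   // eigenproblem solver context
PetscCall( EPSCreate(PETSC_COMM_WORLD,&eps) );
PetscCall( EPSSetOperators(eps,A,NULL) );  // standard problem A x = lambda x
PetscCall( EPSSetFromOptions(eps) );       // e.g. -eps_nev 5 for five eigenvalues
PetscCall( EPSSolve(eps) );
PetscInt nconv;
PetscCall( EPSGetConverged(eps,&nconv) );  // number of converged eigenpairs
PetscCall( EPSDestroy(&eps) );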