While the MPI standard itself makes no mention of threads -- the process being its primary unit of computation -- the use of threads is allowed. Below we discuss what provisions exist for doing so.
Using threads and other shared-memory models in combination with MPI naturally raises the question of how race conditions are handled. Here is an example of a code with a data race that pertains to MPI:
#pragma omp sections
{
#pragma omp section
  MPI_Send( x, /* to process 2 */ );
#pragma omp section
  MPI_Recv( x, /* from process 3 */ );
}

The MPI standard here puts the burden on the user: this code is not legal, and its behavior is not defined.
In hybrid execution, the main question is whether all threads are allowed to make MPI calls. To determine this, replace the MPI_Init call by MPI_Init_thread. Here the required and provided parameters can take the following (monotonically increasing) values:

- MPI_THREAD_SINGLE: only a single thread will execute.
- MPI_THREAD_FUNNELED: the program may use multiple threads, but only the main thread makes MPI calls.
- MPI_THREAD_SERIALIZED: multiple threads may make MPI calls, but never simultaneously.
- MPI_THREAD_MULTIPLE: multiple threads may make MPI calls without restriction.
The main thread is usually the one selected by the master directive, but technically it is the thread that executes MPI_Init_thread. If you call this routine in a parallel region, the main thread may be different from the master.
After the initialization call, you can query the support level with MPI_Query_thread.
In case more than one thread performs communication, MPI_Is_thread_main can determine whether a thread is the main thread.
Python note The thread level can be set through the mpi4py.rc object (section 2.2.2):

mpi4py.rc.threads        # default: True
mpi4py.rc.thread_level   # default: "multiple"

Available levels are multiple, serialized, funneled, single.
MPL note MPL always calls MPI_Init_thread requesting the highest level, MPI_THREAD_MULTIPLE.

enum mpl::threading_modes {
  mpl::threading_modes::single     = MPI_THREAD_SINGLE,
  mpl::threading_modes::funneled   = MPI_THREAD_FUNNELED,
  mpl::threading_modes::serialized = MPI_THREAD_SERIALIZED,
  mpl::threading_modes::multiple   = MPI_THREAD_MULTIPLE
};
threading_modes mpl::environment::threading_mode();
bool mpl::environment::is_thread_main();

End of MPL note
The mpiexec program usually propagates environment variables, so the value of OMP_NUM_THREADS when you call mpiexec will be seen by each MPI process.
Exercise Consider the 2D heat equation and explore the mix of MPI/OpenMP parallelism:
// thread.c
MPI_Init_thread(&argc,&argv,MPI_THREAD_MULTIPLE,&threading);
comm = MPI_COMM_WORLD;
MPI_Comm_rank(comm,&procno);
MPI_Comm_size(comm,&nprocs);
if (procno==0) {
  switch (threading) {
  case MPI_THREAD_MULTIPLE :
    printf("Glorious multithreaded MPI\n");
    break;
  case MPI_THREAD_SERIALIZED :
    printf("No simultaneous MPI from threads\n");
    break;
  case MPI_THREAD_FUNNELED :
    printf("MPI from main thread\n");
    break;
  case MPI_THREAD_SINGLE :
    printf("no threading supported\n");
    break;
  }
}
MPI_Finalize();