MPI

Message Passing Interface (MPI) is used in high-performance computing (HPC) clusters to run large parallel jobs that span multiple compute nodes.

MPI and Slurm

For a tutorial on how to make Slurm reservations for MPI jobs, see the MPI section of the parallel computing tutorial.

Installed MPI versions

There are multiple MPI versions installed on the cluster, but due to updates to the underlying network and the operating system some older ones might no longer be functional.

Therefore it is highly recommended to use the tested and supported versions of MPI listed below.

Each MPI version uses a certain underlying compiler by default. See Overwriting default compiler of an MPI installation below for information on how to change the underlying compiler.

MPI provider | MPI version | GCC compiler | Module name   | Extra notes
------------ | ----------- | ------------ | ------------- | -----------
OpenMPI      | 4.1.5       | gcc/11.3.0   | openmpi/4.1.5 |
OpenMPI      | 4.0.5       | gcc/8.4.0    | openmpi/4.0.5 | There are known issues with this version; we do not recommend using it for new compilations.
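
To check which compiler a given MPI module uses by default, you can inspect the module and what OpenMPI itself reports. This is a quick sketch, assuming the Lmod module system and the module names from the table above:

module show openmpi/4.1.5             # lists the modules it loads and the variables it sets
module load gcc/11.3.0 openmpi/4.1.5
ompi_info | grep -i compiler          # OpenMPI reports the compilers it was built with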

Some libraries/programs might already have a requirement for a certain MPI version. If so, use that version or ask the administrators to create a version of the library that depends on the MPI version you require.
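
For example, to see which MPI module an existing library module depends on, you can query the module system. A minimal sketch, assuming the Lmod module system; somelibrary/1.2.3 is a hypothetical module name:

module spider somelibrary          # list the available versions of the library
module spider somelibrary/1.2.3    # show which modules (e.g. openmpi/4.1.5) need to be loaded first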

Warning

Different versions of MPI are not compatible with each other. Code built with a certain version of MPI will run correctly only with that version. Thus, if you compile your code with a certain version, you will need to load the same version of the library when you run the code.

Also, the MPI libraries are usually linked against Slurm and the network drivers. Thus, when Slurm or the drivers are updated, some older versions of MPI might break. If you are still using such a version, let us know. If you are starting a new project, it is best to use one of the recommended MPI libraries.

Usage

Compiling and running an MPI Hello world program

The following example uses example codes stored in the hpc-examples repository. You can get the repository with the following command:

git clone https://github.com/AaltoSciComp/hpc-examples/

Loading the modules:

module load gcc/11.3.0      # GCC
module load openmpi/4.1.5  # OpenMPI
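
To verify that the modules were loaded correctly, you can check which wrapper and compiler are now in use (the exact paths and version strings depend on the cluster):

which mpicc        # should point to the openmpi/4.1.5 installation
mpicc --version    # prints the underlying compiler version, here GCC 11.3.0
mpirun --version   # prints the OpenMPI version, here 4.1.5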

Compiling the code:

C code is compiled with mpicc:

cd hpc-examples/hello_mpi/
mpicc -O2 -g hello_mpi.c -o hello_mpi

For testing, one can run the program interactively with srun:

srun --time=00:05:00 --mem-per-cpu=200M --ntasks=4 ./hello_mpi

For actual jobs this is not recommended, as any problem with the login node can crash the whole MPI job. Thus you will want to run the program with a Slurm batch script:

#!/bin/bash
#SBATCH --time=00:05:00      # takes 5 minutes all together
#SBATCH --mem-per-cpu=200M   # 200MB per process
#SBATCH --ntasks=4           # 4 processes

module load openmpi/4.1.5  # NOTE: should be the same as you used to compile the code
srun ./hello_mpi
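
Assuming the script above is saved as hello_mpi.sh (a hypothetical file name), you can submit it and check the output as follows:

sbatch hello_mpi.sh     # submit the job; sbatch prints the job ID
squeue -u $USER         # check the status of your queued and running jobs
cat slurm-<jobid>.out   # program output, where <jobid> is the job ID printed by sbatch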

Important

It is important to use srun when you launch your program. This allows the MPI libraries to obtain task placement information (nodes, number of tasks per node, etc.) from Slurm.

Overwriting default compiler of an MPI installation

Typically one should use the compiler that the MPI installation was compiled with. If you need a different compiler, it is usually best to ask the administrators to install a version of MPI built with that compiler.

However, sometimes one can try to overwrite the default compiler. This is obviously faster than installing a new MPI version. However, if you encounter problems after switching the compiler, you should not use it.

Changing compiler when using OpenMPI

The procedure for changing the compilers used by OpenMPI is documented in OpenMPI’s FAQ. Environment variables such as OMPI_MPICC and OMPI_MPIFC can be set to overwrite the default compilers. See the FAQ for the full list of environment variables.

For example, one could use an Intel compiler to compile the Hello world example by setting the OMPI_MPICC and OMPI_MPIFC environment variables.

The Intel C compiler is icc:

module load gcc/11.3.0
module load openmpi/4.1.5
module load intel-oneapi-compilers/2021.4.0

export OMPI_MPICC=icc  # Overwrite the C compiler

mpicc -O2 -g hello_mpi.c -o hello_mpi