Message Passing Interface (MPI) is used in high-performance computing (HPC) clusters to run large parallel jobs that span multiple compute nodes.
MPI and Slurm¶
For a tutorial on how to do Slurm reservations for MPI jobs, check out the MPI section of the parallel computing tutorial.
Installed MPI versions¶
There are multiple MPI versions installed on the cluster, but due to updates to the underlying network and the operating system some older ones might not be functional. Therefore we highly recommend using the tested and recommended versions of MPI.
Each MPI version uses some underlying compiler by default. See the section Overwriting default compiler of an MPI installation below for information on how to change the underlying compiler.
Some libraries or programs might already require a certain MPI version. If so, use that version, or ask the administrators to create a build of the library that depends on the MPI version you require.
Different versions of MPI are not compatible with each other: a binary built with one version of MPI will run correctly only with that version. Thus, if you compile your code with a certain MPI version, you need to load the same version of the library when you run the code.
Also, the MPI libraries are usually linked against Slurm and the network drivers. Thus, when Slurm or driver versions are updated, some older versions of MPI might break. If you are still using such versions, let us know. If you are starting a new project, use one of the recommended MPI libraries.
Compiling and running an MPI Hello world program¶
The following example uses example codes stored in the hpc-examples repository. You can get the repository with the following command:

```shell
git clone https://github.com/AaltoSciComp/hpc-examples/
```
Load the compiler and MPI modules:

```shell
module load gcc/8.4.0      # GCC
module load openmpi/4.0.5  # OpenMPI
```
Compiling the code:
C code is compiled with:

```shell
cd hpc-examples/hello_mpi/
mpicc -O2 -g hello_mpi.c -o hello_mpi
```
Fortran code is compiled with:

```shell
cd hpc-examples/hello_mpi_fortran/
mpifort -O2 -g hello_mpi_fortran.f90 -o hello_mpi_fortran
```
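For reference, a standard MPI Hello world program looks roughly like the following. This is a minimal sketch of the usual pattern (initialize, query rank and size, print, finalize); the actual file in the hpc-examples repository may differ in details.

```c
/* Minimal MPI "Hello world" sketch; requires an MPI installation to
 * compile (e.g. mpicc hello_mpi.c -o hello_mpi). */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, ntasks;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id, 0..ntasks-1 */
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks); /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, ntasks);

    MPI_Finalize();                         /* shut down MPI cleanly */
    return 0;
}
```

When launched with `--ntasks=4`, each of the four processes prints one line with its own rank.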
For testing one might be interested in running the program with srun:

```shell
srun --time=00:05:00 --mem-per-cpu=200M --ntasks=4 ./hello_mpi
```
For actual jobs this is not recommended, as any problem with the login node can crash the whole MPI job. Thus we want to run the program with a Slurm batch script:
```shell
#!/bin/bash
#SBATCH --time=00:05:00      # takes 5 minutes all together
#SBATCH --mem-per-cpu=200M   # 200MB per process
#SBATCH --ntasks=4           # 4 processes

module load openmpi/4.0.5    # NOTE: should be the same as you used to compile the code
srun ./hello_mpi
```
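Assuming the script above is saved as `hello_mpi.sh` (the filename and the job id below are placeholders), it can be submitted and its output inspected roughly as follows:

```shell
sbatch hello_mpi.sh        # submit the job; prints "Submitted batch job <jobid>"
squeue -u $USER            # check the status of your queued and running jobs
cat slurm-<jobid>.out      # by default, job output goes to slurm-<jobid>.out
```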
It is important to use `srun` when you launch your program. This allows the MPI libraries to obtain task placement information (nodes, number of tasks per node, etc.) from the Slurm queue.
Overwriting default compiler of an MPI installation¶
Typically, one should use the compiler that the MPI installation was compiled with. Thus, if you encounter a situation where you would like to use a different compiler, it is usually best to ask the administrators to install a version of MPI built with that compiler.
However, sometimes one can try to overwrite the default compiler. This is obviously faster than waiting for a new MPI installation. However, if you encounter problems after switching the compiler, you should not use it.
Changing the compiler when using OpenMPI¶
The procedure for changing compilers for OpenMPI is documented in the OpenMPI documentation. Environment variables such as `OMPI_MPIFC` can be set to overwrite the default compiler; see the documentation for the full list of environment variables.
For example, one could use the Intel compilers to compile the Hello world example by setting these environment variables before compiling.

Using the Intel C compiler:

```shell
module load gcc/8.4.0
module load openmpi/4.0.5
module load intel-oneapi-compilers/2021.4.0

export OMPI_MPICC=icc    # Overwrite the C compiler
mpicc -O2 -g hello_mpi.c -o hello_mpi
```
Using the Intel Fortran compiler:

```shell
module load gcc/8.4.0
module load openmpi/4.0.5
module load intel-oneapi-compilers/2021.4.0

export OMPI_MPIFC=ifort  # Overwrite the Fortran compiler
mpifort -O2 -g hello_mpi_fortran.f90 -o hello_mpi_fortran
```
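To check which underlying compiler an OpenMPI wrapper will actually invoke, the wrappers accept a `--showme` flag that prints the full command line without compiling anything:

```shell
mpicc --showme     # prints the full compile command, including the underlying C compiler
mpifort --showme   # same for the Fortran wrapper
```

This is a quick way to verify that the environment variable took effect before rebuilding your code.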