Intel MPI
The Intel MPI Library focuses on making applications perform better on Intel architecture-based clusters by implementing the high-performance MPI-3 specification on multiple fabrics.
Examples
simple example
This example shows the basic steps when using Intel MPI.
Load the necessary module
module load mpi/impi
Compile your application using the MPI wrapper compilers mpicc, mpicxx, and mpif90.
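For example, for a C source file (the file name your_app.c is assumed here for illustration):
mpicc -O2 -o your_app your_app.c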
Run your application
mpirun -np 8 /path/to/your_app/your_app
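To verify at startup which fabric Intel MPI selected and how the processes are pinned, you can raise the debug level via the I_MPI_DEBUG environment variable, e.g.:
mpirun -np 8 -genv I_MPI_DEBUG=4 /path/to/your_app/your_app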
thread pinning
This example shows how to run an application on 16 nodes using 32 MPI processes (2 per node) that spawn 128 threads in total, with each set of 4 threads pinned to a single CPU socket. This gives you optimal NUMA placement of processes and memory, e.g. on the Nehalem nodes of the Laki system.
Intel MPI is best used in combination with the Intel compiler.
module load compiler/intel
module load mpi/impi
Compile your application as shown in the simple example above.
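Since this is a hybrid MPI+OpenMP application, the OpenMP flag is needed at compile time. Assuming mpicc wraps the Intel compiler in this setup, a build could look like this (the source file name is illustrative; newer Intel compilers use -qopenmp instead of -openmp):
mpicc -O2 -openmp -o your_app your_app.c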
qsub -l nodes=16:ppn=8,walltime=6:00:00 -I # get 16 nodes for interactive usage
sort -u $PBS_NODEFILE > m # generate a hostlist
mpdboot -n 16 -f m -r ssh # build a process ring to be used by MPI later
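You can optionally check that the ring is up with mpdtrace, which lists the hosts participating in the MPD ring:
mpdtrace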
Run the application using the thread_pin_wrapper.sh script shown below.
mpirun -f $PBS_NODEFILE -np 32 -perhost 2 -genv I_MPI_PIN_DOMAIN=auto -genv KMP_AFFINITY=verbose,scatter,granularity=thread ./thread_pin_wrapper.sh /absolute/path/to/your_app
#!/bin/bash
# Intel-specific environment variable: report and scatter thread affinity
export KMP_AFFINITY=verbose,scatter
export OMP_NUM_THREADS=4

# Determine the MPI rank (Open MPI or Intel MPI environment)
RANK=${OMPI_COMM_WORLD_RANK:=$PMI_RANK}

# Pin even ranks to socket 0 and odd ranks to socket 1,
# keeping each process's memory on its local NUMA node
if [ $(expr $RANK % 2) = 0 ]
then
    export GOMP_CPU_AFFINITY=0-3
    numactl --preferred=0 --cpunodebind=0 "$@"
else
    export GOMP_CPU_AFFINITY=4-7
    numactl --preferred=1 --cpunodebind=1 "$@"
fi
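Whether the CPU ranges 0-3 and 4-7 really correspond to the two sockets depends on the NUMA topology of the node in question; you can check this on a compute node with:
numactl --hardware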