
Intel MPI

From HLRS Platforms

Revision as of 12:39, 19 July 2012

Intel MPI Library focuses on making applications perform better on Intel architecture-based clusters—implementing the high performance MPI-2 specification on multiple fabrics.
Developer: Intel
Platforms: NEC Nehalem Cluster
Category: MPI
License: Commercial
Website: Intel MPI homepage


Simple example

This example shows the basic steps when using Intel MPI.

Load the necessary module

module load mpi/impi

Compile your application using the MPI wrapper compilers mpicc, mpicxx, and mpif90.

Note: You will not find an Intel Compiler version of Intel MPI on most systems. If you want to use Intel MPI in combination with the Intel Compiler, load the GNU version of Intel MPI together with the compiler/intel module. For compilation, call the Intel-specific wrapper compilers mpiicc, mpiicpc, and mpiifort.
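The note above amounts to a simple choice between wrapper compilers. A minimal sketch of that decision; COMPILER_FAMILY is a hypothetical variable used only for illustration (it is not set by the modules):

```shell
# Hypothetical: COMPILER_FAMILY says which compiler module is loaded.
COMPILER_FAMILY=${COMPILER_FAMILY:-gnu}
if [ "$COMPILER_FAMILY" = intel ]; then
    MPICC=mpiicc   # Intel-specific wrapper, used together with compiler/intel
else
    MPICC=mpicc    # generic wrapper, uses the GNU compilers
fi
echo "compiling with $MPICC"
```

With no Intel compiler module loaded this falls through to the generic mpicc wrapper.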

Run your application

mpirun -r ssh -np 8 /path/to/your_app/your_app

Thread pinning

This example shows how to run an application on 16 nodes, using 32 processes that spawn 128 threads in total, with each set of 4 threads pinned to a single CPU socket. This gives optimal NUMA placement of processes and memory, e.g. on the NEC Nehalem Cluster.
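The process and thread counts above follow from simple arithmetic; a small sketch using the numbers from this example:

```shell
# Numbers from the example above: 16 nodes, 2 MPI processes per node,
# 4 OpenMP threads per process (one socket's worth on a Nehalem node).
nodes=16
procs_per_node=2
threads_per_proc=4
procs=$((nodes * procs_per_node))        # total MPI processes
threads=$((procs * threads_per_proc))    # total OpenMP threads
echo "$procs processes, $threads threads"
```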

Intel MPI is best used in combination with the Intel compiler.

module load compiler/intel
module load mpi/impi

Compile your application as shown in the simple example above.

qsub -l nodes=16:ppn=8,walltime=6:00:00 -I   # get 16 nodes for interactive usage
sort -u $PBS_NODEFILE > m                    # generate a hostlist
mpdboot -n 16 -f m -r ssh                    # build a process ring to be used by MPI later

Run the application using the thread_pin_wrapper.sh script shown below.

mpiexec -perhost 2 -genv I_MPI_PIN 0 -np 32 ./thread_pin_wrapper.sh /absolute/path/to/your_app

File: thread_pin_wrapper.sh
#!/bin/bash
export KMP_AFFINITY=verbose,scatter            # Intel specific environment variable

RANK=${PMI_RANK:-0}                            # rank of this process, as set by Intel MPI's process manager

if [ $(expr $RANK % 2) = 0 ]; then             # even ranks -> socket 0
    export GOMP_CPU_AFFINITY=0-3               # GNU OpenMP: pin threads to cores 0-3
    numactl --preferred=0 --cpunodebind=0 "$@"
else                                           # odd ranks -> socket 1
    export GOMP_CPU_AFFINITY=4-7               # pin threads to cores 4-7
    numactl --preferred=1 --cpunodebind=1 "$@"
fi
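The even/odd rank-to-socket mapping the wrapper relies on can be checked in isolation; a minimal sketch, assuming the two-socket core numbering (0-3 and 4-7) of a Nehalem node:

```shell
# Reproduce the wrapper's mapping for the two processes placed on one node.
for RANK in 0 1; do
    if [ $((RANK % 2)) -eq 0 ]; then
        echo "rank $RANK -> socket 0, cores 0-3"
    else
        echo "rank $RANK -> socket 1, cores 4-7"
    fi
done
```

Because mpiexec places 2 processes per host (-perhost 2), consecutive ranks on a node alternate between the two sockets.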

External links

* Intel MPI homepage: http://software.intel.com/en-us/intel-mpi-library/
* Intel MPI documentation: http://software.intel.com/en-us/articles/intel-mpi-library-documentation/