Open MPI

Open MPI is a Message Passing Interface (MPI) library project combining technologies and resources from several other projects (FT-MPI, LA-MPI, LAM/MPI, and PACX-MPI).
Developer: Open MPI Development Team
Platforms: NEC Nehalem Cluster
Category: MPI
License: New BSD license
Website: Open MPI homepage


Examples

simple example

This example shows the basic steps when using Open MPI.

Load the necessary module

module load mpi/openmpi
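
To check that the wrapper compilers are now available, you can run (an optional sanity check):

which mpicc      # should point into the loaded openmpi module
mpicc --showme   # shows the compiler command line the wrapper uses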


Compile your application using the MPI wrapper compilers mpicc, mpic++ or mpif90:

mpicc your_app.c -o your_app


Now we run our application using 128 processes spread across 16 nodes in an interactive job (-I option):

qsub -l nodes=16:ppn=8,walltime=6:00:00 -I   # get 16 nodes for 6 hours
mpirun -np 128 your_app                      # run your_app using 128 processes


specifying the number of processes per node

Open MPI divides resources into so-called 'slots'. By specifying ppn:X to the batch system, you set the number of slots per node. So for a simple MPI job with 8 processes per node (= 1 process per core), ppn:8 is the best choice, as in the example above. Details can be specified on the mpirun command line. The PBS setup is adjusted for ppn:8, please do not use other values.
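
For instance, with the standard ppn:8 request, a 2-node job provides 16 slots, and mpirun fills them with one process per slot (the node and process counts here are only illustrative):

qsub -l nodes=2:ppn=8,walltime=1:00:00 -I   # 2 nodes x 8 slots = 16 slots
mpirun -np 16 your_app                      # one process per slot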

If you want to use fewer processes per node, e.g. because you are restricted by memory requirements, or because you have a hybrid parallel application using MPI and OpenMP, MPI would by default put the first 8 processes on the first node, the second 8 on the second, and so on. To avoid this, you can use the -npernode option.

mpirun -np X -npernode 2 your_app

This would start 2 processes per node. This way you can use a larger number of nodes with a smaller number of processes, or you can, for example, start threads out of the processes.
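
As an illustration of the hybrid case, the following sketch places 2 MPI processes on each of 4 nodes and lets each process start 4 OpenMP threads (the node, process and thread counts are assumptions for illustration; -x exports the variable to the remote ranks):

qsub -l nodes=4:ppn=8,walltime=1:00:00 -I                 # 4 nodes with 8 slots each
mpirun -np 8 -npernode 2 -x OMP_NUM_THREADS=4 your_app    # 2 processes per node, 4 threads each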


process pinning

If you want to pin your processes to a CPU (and enable NUMA memory affinity), use

mpirun -np X --mca mpi_paffinity_alone 1 your_app


Warning: This will not behave as expected for hybrid multi-threaded applications (MPI + OpenMP), as the threads will be pinned to a single CPU as well! Use this only if you want to pin one process per core - no extra threads!


thread pinning

For pinning of hybrid MPI/OpenMP applications, use the following wrapper script:

File: thread_pin_wrapper.sh
#!/bin/bash
export KMP_AFFINITY=verbose,scatter           # Intel specific environment variable
export OMP_NUM_THREADS=4

# Determine the MPI rank of this process (Open MPI or PMI launcher)
RANK=${OMPI_COMM_WORLD_RANK:=$PMI_RANK}

# Bind even ranks to socket 0 (cores 0-3) and odd ranks to socket 1 (cores 4-7)
if [ $(expr $RANK % 2) = 0 ]
then
     export GOMP_CPU_AFFINITY=0-3             # GCC OpenMP thread affinity
     numactl --preferred=0 --cpunodebind=0 "$@"
else
     export GOMP_CPU_AFFINITY=4-7
     numactl --preferred=1 --cpunodebind=1 "$@"
fi
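
Make the wrapper executable before using it (assuming it is saved as thread_pin_wrapper.sh in your working directory):

chmod +x thread_pin_wrapper.sh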


Run your application with the following command:

mpirun -np X -npernode 2 thread_pin_wrapper.sh your_app


Warning: Do not use the mpi_paffinity_alone option in this case!

Common Problems

InfiniBand retry count

I get an error message about timeouts. What can I do?

    If your parallel programs sometimes crash with an error message like this:
    --------------------------------------------------------------------------
    The InfiniBand retry count between two MPI processes has been
    exceeded.  "Retry count" is defined in the InfiniBand spec 1.2
    (section 12.7.38):
    
        The total number of times that the sender wishes the receiver to
        retry timeout, packet sequence, etc. errors before posting a
        completion error.
    
    This error typically means that there is something awry within the
    InfiniBand fabric itself.  You should note the hosts on which this
    error has occurred; it has been observed that rebooting or removing a
    particular host from the job can sometimes resolve this issue.  
    
    Two MCA parameters can be used to control Open MPI's behavior with
    respect to the retry count:
    
    * btl_openib_ib_retry_count - The number of times the sender will
      attempt to retry (defaulted to 7, the maximum value).
    
    * btl_openib_ib_timeout - The local ACK timeout parameter (defaulted
      to 10).  The actual timeout value used is calculated as:
    
         4.096 microseconds * (2^btl_openib_ib_timeout)
    
      See the InfiniBand spec 1.2 (section 12.7.34) for more details.
    --------------------------------------------------------------------------
    

    This means that the MPI messages cannot pass through our InfiniBand switches before the btl_openib_ib_timeout expires. How often this occurs also depends on the traffic on the network. We have adjusted the parameters so that it should normally work, but if you have compiled your own Open MPI, perhaps also as part of another program package, you might not have adjusted this value correctly. However, you can specify it when calling mpirun:

    mpirun -mca btl_openib_ib_timeout 20 -np ... your-program ...
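
    With btl_openib_ib_timeout set to 20, for example, the local ACK timeout becomes 4.096 microseconds * 2^20, i.e. roughly 4.3 seconds, compared to about 4.2 milliseconds at the default value of 10.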
    

    You can check the preconfigured parameters of the currently loaded module with:

     ompi_info --param btl openib 
    

    where you can grep for the parameters mentioned above.
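
    For example, to show only the timeout setting (piping ompi_info through grep; the exact output format depends on the Open MPI version):

     ompi_info --param btl openib | grep btl_openib_ib_timeout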

