=== OpenMPI example ===

==== simple example ====

To use OpenMPI with the Intel compiler, create a <code>.modulerc</code> file in your home directory with the following contents:
{{File | filename = .modulerc
| content = <pre>
#%Module1.0#
set version 1.0
module load compiler/intel
module load mpi/openmpi
</pre>
}}

For compilation, use the MPI wrapper scripts such as mpicc/mpic++/mpif90.

The following example is for a pure MPI job using 16 nodes (128 processes). For illustration, it is run in an interactive session (-I option).

First step: submit a batch job to get the nodes
{{Command
| command = qsub -l nodes=16:nehalem:ppn=8,walltime=6:00:00 -I # get the 16 nodes
}}

In the interactive session, which you will get after some time, start the application with
{{Command
| command = mpirun -np 128 PathToYourApp
}}
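
The process count in this example follows directly from the batch request; a quick sanity check in plain shell arithmetic (no batch system or cluster involved):

```shell
# 16 nodes with ppn=8 give 16 * 8 = 128 MPI processes,
# which is the -np value used above.
NODES=16
PPN=8
NP=$((NODES * PPN))
echo "mpirun -np $NP PathToYourApp"
```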

==== more complex examples ====

Open MPI divides resources into units called 'slots'. Specifying <code>ppn:X</code> to the batch system sets the number of slots per node. For a simple MPI job with 8 processes per node (= 1 process per core), <code>ppn:8</code> is the best choice, as in the example above. Details can be specified on the <code>mpirun</code> command line. The PBS setup is adjusted for ppn:8; please do not use other values.

If you want to use fewer processes per node, e.g. because you are limited by memory per process, or because you have a hybrid parallel application using OpenMP and MPI, note that MPI by default places the first 8 processes on the first node, the second 8 on the second node, and so on. To avoid this, use the <code>-npernode</code> option.
{{Command
| command = mpirun -np X -npernode 2 your_app
}}
This starts 2 processes per node. This way you can use a larger number of nodes with a smaller number of processes, or e.g. start additional threads from within each process.
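
The difference between the default placement and <code>-npernode</code> can be sketched with plain shell arithmetic (the node numbers are illustrative, not real hostnames):

```shell
# Sketch of rank placement for 8 ranks on 4 nodes with ppn=8.
# By default Open MPI fills all of one node's slots before moving on;
# -npernode 2 puts exactly 2 ranks on each node. Node numbers invented.
SLOTS=8        # slots per node, from ppn:8
NPERNODE=2
default=""; spread=""
for r in 0 1 2 3 4 5 6 7; do
  default="$default rank$r:node$((r / SLOTS + 1))"    # default: all 8 ranks land on node1
  spread="$spread rank$r:node$((r / NPERNODE + 1))"   # -npernode 2: 2 ranks per node
done
echo "default:    $default"
echo "-npernode 2:$spread"
```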

If you want to pin your processes to a CPU (and enable NUMA memory affinity), use
{{Command
| command = mpirun -np X --mca mpi_paffinity_alone 1 your_app
}}

{{Warning
| text = This will not behave as expected for hybrid multithreaded applications, as the threads will be pinned to a single CPU as well! Use this only with one process per core and no extra threads.
}}

For pinning of hybrid OpenMP/MPI applications, you can use the wrapper script from the Intel MPI example; do not use the mpi_paffinity_alone switch, but run
{{Command
| command = mpirun -np X -npernode 2 /path/to/wrapper.sh /path/to/app
}}

=== Intel MPI example ===

==== simple example ====

Load the necessary modules
{{Command
| command = module load mpi/impi
}}

Run your application with
{{Command
| command = mpirun -r ssh -np 8 your_app
}}

==== more complex example ====

As the Nehalem system is a two-socket system with locally attached ccNUMA memory, memory and process placement can be crucial.

Here is an example of a 16-node job using 1 process per socket and 4 threads per socket, with optimal NUMA placement of processes and memory.
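
The process and thread counts used in this example follow from that layout; a quick bookkeeping check in plain shell (no cluster needed):

```shell
# 16 nodes, 2 sockets per node, 1 MPI process per socket,
# 4 OpenMP threads per process. Plain arithmetic only.
NODES=16
SOCKETS_PER_NODE=2
THREADS_PER_PROC=4
PROCS=$((NODES * SOCKETS_PER_NODE))        # MPI processes
THREADS=$((PROCS * THREADS_PER_PROC))      # threads in total
echo "$PROCS processes, $THREADS threads"
```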

Prerequisite: use Intel MPI and the best available Intel compiler. To set up the environment, put this .modulerc file in your home directory:

{{File
| filename = .modulerc
| content = <pre>
#%Module1.0#
set version 1.0
module load compiler/intel/11.0
module load mpi/impi/intel-11.0.074-impi-3.2.0.011
</pre>
}}

Compile your application using mpicc/mpicxx/mpif90 (GNU compiler) or mpiicc/mpiicpc/mpiifort (Intel compiler).

First step: submit a batch job to get the nodes
{{Command
| command = qsub -l nodes=16:nehalem:ppn=8,walltime=6:00:00 -I # get the 16 nodes
}}

Second step: make a hostlist
{{Command
| command = sort -u $PBS_NODEFILE > m
}}
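
What the hostlist step does can be reproduced offline: $PBS_NODEFILE lists each node once per slot, and <code>sort -u</code> keeps one line per node. A sketch with invented node names:

```shell
# Simulate $PBS_NODEFILE for 16 nodes with ppn=8: each node appears
# 8 times, 128 lines in total. Node names n01..n16 are invented.
PBS_NODEFILE=$(mktemp)
for n in $(seq -w 1 16); do
  for s in $(seq 8); do echo "n$n"; done
done > "$PBS_NODEFILE"

sort -u "$PBS_NODEFILE" > m
# m now holds 16 unique node names, matching the "-n 16" given to mpdboot
```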

Third step: make a process ring to be used by MPI later
{{Command
| command = mpdboot -n 16 -f m -r ssh
}}

Fourth step: start the MPI application
{{Command
| command = mpiexec -perhost 2 -genv I_MPI_PIN 0 -np 32 ./wrapper.sh ./yourGloriousApp
}}

With wrapper.sh looking like this:

{{File
| filename = wrapper.sh
| content = <pre>
#!/bin/bash
export KMP_AFFINITY=verbose,scatter
export OMP_NUM_THREADS=4
RANK=${OMPI_COMM_WORLD_RANK:=$PMI_RANK}
if [ $(expr $RANK % 2) = 0 ]
then
    export GOMP_CPU_AFFINITY=0-3
    numactl --preferred=0 --cpunodebind=0 "$@"
else
    export GOMP_CPU_AFFINITY=4-7
    numactl --preferred=1 --cpunodebind=1 "$@"
fi
</pre>
}}

The result is an application running on 16 nodes, using 32 processes that spawn 128 threads in total. On each node, one set of 4 threads is pinned to the first socket and the other set of 4 threads to the second socket.
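
The even/odd rank test in wrapper.sh can be traced in isolation (ranks simulated in plain shell, no MPI launcher involved):

```shell
# Trace of wrapper.sh's placement logic for four simulated ranks:
# even ranks go to socket 0 (cores 0-3), odd ranks to socket 1 (cores 4-7).
placement=""
for RANK in 0 1 2 3; do
  if [ $((RANK % 2)) -eq 0 ]; then
    placement="$placement rank$RANK:socket0"
  else
    placement="$placement rank$RANK:socket1"
  fi
done
placement="${placement# }"   # drop leading space
echo "$placement"
# prints: rank0:socket0 rank1:socket1 rank2:socket0 rank3:socket1
```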

=== MVAPICH2 example ===

==== simple example ====

Load the necessary module
{{Command
| command = module load mpi/mvapich2
}}

Run your application with
{{Command
| command = mpirun_rsh -np 8 -hostfile $PBS_NODEFILE your_app
}}