- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

Batch system

From HLRS Platforms
Jump to navigationJump to search
No edit summary
No edit summary
Line 54: Line 54:
=== Multithreaded job ===
=== Multithreaded job ===


  ws=`ws_allocate myimportantdata 10`   # get a workspace for 10 days
  ws=`ws_allocate myimportantdata 10`   # get a workspace for 10 days
  cd $ws                               # go there
  cd $ws                                 # go there
  cp ~/input/file.dat .                 # get some data
  cp $HOME/input/file.dat .             # get some data
  export OMP_NUM_THREADS=8             # use 8 OMP threads
  export OMP_NUM_THREADS=8               # use 8 OMP threads
  export F_PROGINF=DETAIL               # get some performance information after the run
  export F_PROGINF=DETAIL               # get some performance information after the run
  ~/bin/myApp                           # run my application
  $HOME/bin/myApp                       # run my application
  cp output.dat ~/output
  cp output.dat $HOME/output


=== MPI job ===
=== MPI job ===


  ws=`ws_allocate myimportantdata 10`   # get a workspace for 10 days
  ws=`ws_allocate myimportantdata 10`   # get a workspace for 10 days
  cd $ws                               # go there
  cd $ws                                 # go there
  cp ~/input/file.dat .                 # get some data
  cp $HOME/input/file.dat .             # get some data
  export MPIPROGINF=DETAIL
  export MPIPROGINF=DETAIL
  mpirun -nn 2 -nnp 16 ~/bin/myApp     # run my application on 2 nodes, 16 CPUs each (32 total)
  mpirun -nn 2 -nnp 16 $HOME/bin/myApp   # run my application on 2 nodes, 16 CPUs each (32 total)
  cp output.dat ~/output
  cp output.dat $HOME/output
 
== Hybrid OpenMP and MPI job ==
 
SCR=`ws_allocate MyWorkspace 2`   
cd $SCR
export OMP_NUM_THREADS=16              # 16 threads
export MPIPROGINF=YES
export MPIMULTITASKMIX=YES
MPIEXPORT="OMP_NUM_THREADS"  # make this environment known to all nodes
export MPIEXPORT
mpirun -nn 4 -nnp 1 $HOME/bin//mycode  # run on 4 nodes using 1 process per node, but 16 threads
cp outfile $HOME/output

Revision as of 11:02, 27 November 2008

Job Examples

Sample of a large job; it will be executed in '?multi' on nodes v901-v907.

#PBS -q dq
#PBS -l cpunum_job=16           # CPUs per node
#PBS -b 2                       # number of nodes, max 4 at the moment
#PBS -l elapstim_req=12:00:00   # max wallclock time
#PBS -l cputim_job=192:00:00    # max accumulated cputime per node
#PBS -l cputim_prc=11:55:00     # max accumulated cputime per process
#PBS -l memsz_job=500gb         # memory per node
#PBS -A <acctcode>              # Your Account code, see login message, without <>
#PBS -j o                       # join stdout/stderr
#PBS -T mpisx                   # Job type: mpisx for MPI
#PBS -N MyJob                   # job name
#PBS -M MyMail@mydomain         # you should always specify your email address

Sample of a small job; it will be executed in '?single' on v900 in shared mode, so other jobs may run on the same node.

#PBS -q dq
#PBS -l cpunum_job=8            # CPUs per node
#PBS -b 1                       # number of nodes
#PBS -l elapstim_req=12:00:00   # max wallclock time
#PBS -l cputim_job=192:00:00    # max accumulated cputime per node
#PBS -l cputim_prc=11:55:00     # max accumulated cputime per process
#PBS -l memsz_job=64gb          # memory per node
#PBS -A <acctcode>              # Your Account code, see login message, without <>
#PBS -j o                       # join stdout/stderr
#PBS -T mpisx                   # Job type: mpisx for MPI
#PBS -N MyJob                   # job name
#PBS -M MyMail@mydomain         # you should always specify your email address

Sample of a test job; it will be executed in 'test', where one job is always allowed to run no matter how loaded the node is.

#PBS -q dq
#PBS -l cpunum_job=4            # CPUs per node
#PBS -b 1                       # number of nodes
#PBS -l elapstim_req=1200       # max wallclock time
#PBS -l cputim_job=600          # max accumulated cputime per node
#PBS -l cputim_prc=599          # max accumulated cputime per process
#PBS -l memsz_job=16gb          # memory per node
#PBS -A <acctcode>              # Your Account code, see login message, without <>
#PBS -j o                       # join stdout/stderr
#PBS -T mpisx                   # Job type: mpisx for MPI
#PBS -N MyJob                   # job name
#PBS -M MyMail@mydomain         # you should always specify your email address
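
A header like one of the three above is combined with the commands from the next section into a single script file and handed to the batch system. The following is only a sketch, assuming the standard qsub/qstat client commands are available and using the placeholder file name myjob.pbs:

qsub myjob.pbs                  # submit the job script to the batch system
qstat                           # list your jobs and their current state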

Contents of a Job

A typical job will create a workspace, copy some data, run the application and save some data at the end.

Multithreaded job

ws=`ws_allocate myimportantdata 10`    # get a workspace for 10 days
cd $ws                                 # go there
cp $HOME/input/file.dat .              # get some data
export OMP_NUM_THREADS=8               # use 8 OMP threads
export F_PROGINF=DETAIL                # get some performance information after the run
$HOME/bin/myApp                        # run my application
cp output.dat $HOME/output
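
Header and body go into one script file. Combining directives from the small-job sample above with the commands just shown gives a complete script; this is only a sketch, and the job name MyOMPJob as well as all paths are placeholders:

#PBS -q dq
#PBS -l cpunum_job=8            # CPUs per node, matching OMP_NUM_THREADS below
#PBS -b 1                       # number of nodes
#PBS -l elapstim_req=12:00:00   # max wallclock time
#PBS -l memsz_job=64gb          # memory per node
#PBS -A <acctcode>              # your account code, see login message
#PBS -j o                       # join stdout/stderr
#PBS -N MyOMPJob                # job name (placeholder)

ws=`ws_allocate myimportantdata 10`    # get a workspace for 10 days
cd $ws                                 # go there
cp $HOME/input/file.dat .              # get some data
export OMP_NUM_THREADS=8               # use 8 OMP threads
export F_PROGINF=DETAIL                # get some performance information after the run
$HOME/bin/myApp                        # run my application
cp output.dat $HOME/output             # save the results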

MPI job

ws=`ws_allocate myimportantdata 10`    # get a workspace for 10 days
cd $ws                                 # go there
cp $HOME/input/file.dat .              # get some data
export MPIPROGINF=DETAIL
mpirun -nn 2 -nnp 16 $HOME/bin/myApp   # run my application on 2 nodes, 16 CPUs each (32 total)
cp output.dat $HOME/output
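
The mpirun geometry should be consistent with what the job header requests: -nn gives the number of nodes and -nnp the number of MPI processes per node. As a sketch, the matching directives from the large-job sample above would be:

#PBS -b 2                       # 2 nodes, matching mpirun -nn 2
#PBS -l cpunum_job=16           # 16 CPUs per node, matching mpirun -nnp 16
#PBS -T mpisx                   # job type mpisx for MPI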

Hybrid OpenMP and MPI job

SCR=`ws_allocate MyWorkspace 2`        # get a workspace for 2 days
cd $SCR
export OMP_NUM_THREADS=16              # 16 threads
export MPIPROGINF=YES
export MPIMULTITASKMIX=YES
MPIEXPORT="OMP_NUM_THREADS"            # make this environment variable known on all nodes
export MPIEXPORT
mpirun -nn 4 -nnp 1 $HOME/bin/mycode   # run on 4 nodes with 1 MPI process per node, each using 16 threads
cp outfile $HOME/output
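
For the hybrid case the header still has to request all CPUs on each node, since every MPI process spawns 16 OpenMP threads. A sketch of the matching directives, based on the samples above:

#PBS -b 4                       # 4 nodes, matching mpirun -nn 4
#PBS -l cpunum_job=16           # 16 CPUs per node, used by the 16 OpenMP threads
#PBS -T mpisx                   # job type mpisx, as in the MPI samples above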