Batch system
== Job Examples ==
Sample of a large job; it will be executed in 'multi' on v901-v907:
 #PBS -q dq
 #PBS -l cpunum_job=16          # CPUs per node
 #PBS -b 2                      # number of nodes, max 4 at the moment
 #PBS -l elapstim_req=12:00:00  # max wallclock time
 #PBS -l cputim_job=192:00:00   # max accumulated cputime per node
 #PBS -l cputim_prc=11:55:00    # max accumulated cputime per process
 #PBS -l memsz_job=500gb        # memory per node
 #PBS -A <acctcode>             # your account code, see login message, without <>
 #PBS -j o                      # join stdout/stderr
 #PBS -T mpisx                  # job type: mpisx for MPI
 #PBS -N MyJob                  # job name
 #PBS -M MyMail@mydomain        # you should always specify your email
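A script like the one above is submitted with qsub and monitored with qstat. A minimal sketch; the file name myjob.pbs is just a placeholder:

 qsub myjob.pbs    # submit the request, prints its request ID
 qstat             # check the state of your requests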
Sample of a small job; it will be executed in 'single' on v900 in shared mode, so other jobs will run on the same node:
 #PBS -q dq
 #PBS -l cpunum_job=8           # CPUs per node
 #PBS -b 1                      # number of nodes
 #PBS -l elapstim_req=12:00:00  # max wallclock time
 #PBS -l cputim_job=192:00:00   # max accumulated cputime per node
 #PBS -l cputim_prc=11:55:00    # max accumulated cputime per process
 #PBS -l memsz_job=64gb         # memory per node
 #PBS -A <acctcode>             # your account code, see login message, without <>
 #PBS -j o                      # join stdout/stderr
 #PBS -T mpisx                  # job type: mpisx for MPI
 #PBS -N MyJob                  # job name
 #PBS -M MyMail@mydomain        # you should always specify your email
Sample of a test job; it will be executed in 'test', where one job is always run immediately, no matter how loaded the node is:
 #PBS -q dq
 #PBS -l cpunum_job=4           # CPUs per node
 #PBS -b 1                      # number of nodes
 #PBS -l elapstim_req=1200      # max wallclock time (seconds)
 #PBS -l cputim_job=600         # max accumulated cputime per node (seconds)
 #PBS -l cputim_prc=599         # max accumulated cputime per process (seconds)
 #PBS -l memsz_job=16gb         # memory per node
 #PBS -A <acctcode>             # your account code, see login message, without <>
 #PBS -j o                      # join stdout/stderr
 #PBS -T mpisx                  # job type: mpisx for MPI
 #PBS -N MyJob                  # job name
 #PBS -M MyMail@mydomain        # you should always specify your email
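A queued or running request can be removed again with qdel. A short sketch; the request ID shown is hypothetical, take the real one from the qstat output:

 qstat       # list your requests and their request IDs
 qdel 1234   # delete the request with ID 1234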
== Contents of Job ==
A typical job will create a workspace, copy input data, run the application, and save the results at the end. The fragments below show this for a multithreaded and an MPI run; a complete script combining such a body with the #PBS header is sketched at the end of this section.
=== Multithreaded job ===
 ws=`ws_allocate myimportantdata 10`  # get a workspace for 10 days
 cd $ws                               # go there
 cp ~/input/file.dat .                # get some data
 export OMP_NUM_THREADS=8             # use 8 OMP threads
 export F_PROGINF=DETAIL              # get some performance information after the run
 ~/bin/myApp                          # run my application
 cp output.dat ~/output               # save the results
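If the data is not needed for the full 10 days, the workspace can be given back early. A short sketch, assuming the usual HLRS workspace tools ws_list and ws_release are available:

 ws_list                      # show your current workspaces and their expiry dates
 ws_release myimportantdata   # release the workspace before it expires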
=== MPI job ===
 ws=`ws_allocate myimportantdata 10`  # get a workspace for 10 days
 cd $ws                               # go there
 cp ~/input/file.dat .                # get some data
 export MPIPROGINF=DETAIL             # get some performance information after the run
 mpirun -nn 2 -nnp 16 ~/bin/myApp     # run my application on 2 nodes, 16 CPUs each (32 total)
 cp output.dat ~/output               # save the results
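Putting both parts together, a complete request for the MPI example could look like the sketch below; account code, paths, and resource limits are placeholders and must be adapted to your project:

 #PBS -q dq
 #PBS -l cpunum_job=16          # CPUs per node
 #PBS -b 2                      # number of nodes
 #PBS -l elapstim_req=12:00:00  # max wallclock time
 #PBS -l memsz_job=500gb        # memory per node
 #PBS -A <acctcode>             # your account code, without <>
 #PBS -j o                      # join stdout/stderr
 #PBS -T mpisx                  # job type: mpisx for MPI
 #PBS -N MyJob                  # job name
 #PBS -M MyMail@mydomain        # notification address
 # job body: stage data, run, save results
 ws=`ws_allocate myimportantdata 10`
 cd $ws
 cp ~/input/file.dat .
 export MPIPROGINF=DETAIL
 mpirun -nn 2 -nnp 16 ~/bin/myApp
 cp output.dat ~/output

Saved as, e.g., run.pbs, it is submitted with qsub run.pbs.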