- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

Batch system


Job Examples

Sample large job; it will be executed in '?multi' on v901-v907.

#PBS -q dq
#PBS -l cpunum_job=16           # CPUs per node
#PBS -b 2                       # number of nodes, max 4 at the moment
#PBS -l elapstim_req=12:00:00   # max wallclock time
#PBS -l cputim_job=192:00:00    # max accumulated cputime per node
#PBS -l cputim_prc=11:55:00     # max accumulated cputime per process
#PBS -l memsz_job=500gb         # memory per node
#PBS -A <acctcode>              # Your Account code, see login message, without <>
#PBS -j o                       # join stdout/stderr
#PBS -T mpisx                   # Job type: mpisx for MPI
#PBS -N MyJob                   # job name
#PBS -M MyMail@mydomain         # you should always specify your email address
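
The directives above only reserve resources; the script body that follows them does the actual work. Below is a minimal sketch of such a body, assuming the MPI/SX mpirun launcher and that NQSII exports PBS_O_WORKDIR as PBS does; the program name and launcher flags are illustrative only, please check 'man mpirun' on the system.

cd $PBS_O_WORKDIR                       # assumption: change to the directory the job was submitted from
mpirun -nn 2 -nnp 16 ./my_mpi_program   # illustrative MPI/SX launch: 2 nodes x 16 processes per node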

Sample small job; it will be executed in '?single' on v900 in shared mode, i.e. other jobs may run on the same node.

#PBS -q dq
#PBS -l cpunum_job=8            # CPUs per node
#PBS -b 1                       # number of nodes
#PBS -l elapstim_req=12:00:00   # max wallclock time
#PBS -l cputim_job=192:00:00    # max accumulated cputime per node
#PBS -l cputim_prc=11:55:00     # max accumulated cputime per process
#PBS -l memsz_job=64gb          # memory per node
#PBS -A <acctcode>              # Your Account code, see login message, without <>
#PBS -j o                       # join stdout/stderr
#PBS -T mpisx                   # Job type: mpisx for MPI
#PBS -N MyJob                   # job name
#PBS -M MyMail@mydomain         # you should always specify your email address
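
For the single-node case the script body is analogous; all processes run on the one requested node. Again a hedged sketch, with the program name and mpirun options as illustrative assumptions:

cd $PBS_O_WORKDIR               # assumption: NQSII sets PBS_O_WORKDIR like PBS
mpirun -np 8 ./my_mpi_program   # illustrative: 8 MPI processes on the single requested node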

Sample test job; it will be executed in 'test'. One test job is always allowed to run, no matter how loaded the node is.

#PBS -q dq
#PBS -l cpunum_job=4            # CPUs per node
#PBS -b 1                       # number of nodes
#PBS -l elapstim_req=1200       # max wallclock time
#PBS -l cputim_job=600          # max accumulated cputime per node
#PBS -l cputim_prc=599          # max accumulated cputime per process
#PBS -l memsz_job=16gb          # memory per node
#PBS -A <acctcode>              # Your Account code, see login message, without <>
#PBS -j o                       # join stdout/stderr
#PBS -T mpisx                   # Job type: mpisx for MPI
#PBS -N MyJob                   # job name
#PBS -M MyMail@mydomain         # you should always specify your email address
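
A finished script (directives plus body) is handled with the usual NQSII client commands; the command names are standard, but the file and request names below are just examples:

qsub myjob.nqs        # submit the job script
qstat                 # show the status of your requests
qdel <request-id>     # remove a request from the queue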