- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -
Batch System PBSPro (vulcan)
Introduction
The only way to start a parallel job on the compute nodes of this system is to use the batch system. The installed batch system is based on PBSPro.
Writing a submission script is typically the most convenient way to submit your job to the batch system. You generally interact with the batch system in two ways: through options specified in job submission scripts (detailed below in the examples) and by using PBSPro or MOAB commands on the login nodes. There are three key commands used to interact with the batch system:
- qsub
- qstat
- qdel
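A minimal workflow with these three commands might look like this (the job script name is a placeholder):
qsub my_batchjob_script.pbs   # submit the job; qsub prints the request ID on success
qstat -a                      # list your batch jobs and their states
qdel <JOB_ID>                 # cancel the job with the given request ID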
Check the PBSPro man page for more advanced commands and options:
man pbs_professional
Requesting Resources using the batch system
You have to specify the resources you need for your batch job. These resources are specified by including them in the -l option argument on the qsub command or in the PBS job script. The 2 important resources you have to specify are the number of nodes of a specific node type and the walltime you need for this job:
select=<number of nodes>:<node_resource_variable=type>
walltime=<time>
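Inside a job script header these two resources might, for example, appear as follows (the node type is only an illustration; see the table below):
#PBS -l select=2:node_type=hsw
#PBS -l walltime=00:20:00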
- To distinguish between different nodes, 4 node resource variables are assigned to each node: node_type, node_type_cpu, node_type_mem and node_type_core. You have to specify at least one of these resource variables, or a valid available combination of them, to select a specific type of node.
| node_type | node_type_cpu | node_type_mem | node_type_core | Graphic | localscratch | linkspeed | describes | notes | # of nodes (laki) | # of nodes (laki2) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| sb | SandyBridge@2.60GHz | 32gb | 16c | | | QDR | cpu type intel sandy bridge, 32GB memory | 2 octa core-CPU per node | 98 | 178 |
| sb | SandyBridge@2.60GHz | 64gb | 16c | | | QDR | cpu type intel sandy bridge, 64GB memory | 2 octa core-CPU per node | 6 | 10 |
| sb | SandyBridge@2.60GHz | 64gb | 16c | | | FDR | cpu type intel sandy bridge, 64GB memory | 2 octa core-CPU per node | 24 | 0 |
| sb | SandyBridge@2.60GHz | 128gb | 16c | | | QDR | cpu type intel sandy bridge, 128GB memory | 2 octa core-CPU per node | 0 | 4 (1 shared) |
| hsw | Haswell@2.60GHz | 128gb | 20c | | | QDR | cpu type intel haswell, 128GB memory | 2 x 10 core-CPU 2.6 GHz per node | 84 | 0 |
| hsw | Haswell@2.60GHz | 256gb | 20c | | | QDR | cpu type intel haswell, 256GB memory | 2 x 10 core-CPU 2.6 GHz per node | 4 | 0 |
| hsw | Haswell@2.50GHz | 128gb | 24c | | | QDR | cpu type intel haswell, 128GB memory | 2 x 12 core-CPU 2.5 GHz per node | 200 | 0 |
| hsw | Haswell@2.50GHz | 128gb | 24c | | | FDR | cpu type intel haswell, 128GB memory | 2 x 12 core-CPU 2.5 GHz per node | 144 | 0 |
| hsw | Haswell@2.50GHz | 256gb | 24c | | | QDR | cpu type intel haswell, 256GB memory | 2 x 12 core-CPU 2.5 GHz per node | 16 | 0 |
| il | Interlagos@2.6GHz | 256gb | 48c | | | QDR | interlagos node with 256GB memory | | 6 | 0 |
| il | Interlagos@2.6GHz | 256gb | 48c | | 4TB | QDR | interlagos node with 256GB memory, 4TB local scratch disk | | 4 | 0 |
| ib | IvyBridge@3.3GHz | 384gb | 16c | Tesla K20Xm | 11TB | QDR | node with intel IvyBridge@3.3GHz, 16 cores, 384GB memory, 11TB local SSD scratch disk, Tesla K20Xm | only for single node jobs available | 3 | 0 |
| nh | Nehalem_EX@2.67GHz | 1024gb | 48c | | | SDR | cpu type intel nehalem, 1TByte memory, 8 socket 6 core CPUs | will be shared with other jobs! Please use "qsub -q smp -l select=1:node_type=nh ..." | 1 (shared) | 0 |
| nh | Nehalem@2.8GHz | 144gb | 8c | | 6TB | SDR | cpu type intel nehalem, 148GB memory, 6TB local scratch | 2 octa core-CPU per node + local scratch disk | 1 | 0 |
| fx5800 | Nehalem@2.93GHz | 24gb | 4c | Quadro FX 5800 | | SDR | Graphic node Nvidia Quadro FX 5800, 8 core intel W3540, 24GB memory | only 1 node per job! Please use "qsub -q vis -l select=1:node_type=fx5800 ..." | 4 | 1 |
| gtx680 | Nehalem@2.53GHz | 12gb | 4c | GTX680 | | SDR | Cuda node with GTX680, 8 core intel E5540, 12GB memory | only 1 node per job! | 2 | 0 |
A multi-node-type job can also be specified using a +:
select=1:node_type=hsw:node_type_mem=256gb+3:node_type=hsw:node_type_mem=128gb:node_type_core=20c
The example above will allocate 1 hsw node (either the 20-core or the 24-core type) with 256 GB memory and 3 hsw nodes (the 20-core type) with 128 GB memory.
To allocate special nodes with a local disk, you can use the node resource variable localscratch:
select=1:node_type=il:localscratch=4TB
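Putting it together, a complete resource request on the qsub command line could look like this (the values are only an illustration; the node type, memory and core count must match an available combination from the table above, and the script name is a placeholder):
qsub -l select=2:node_type=hsw:node_type_mem=128gb:node_type_core=24c -l walltime=01:00:00 my_batchjob_script.pbs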
Batch Mode
Production jobs are typically run in batch mode. Batch scripts are shell scripts containing flags and commands to be interpreted by a shell and are used to run a set of commands in sequence.
- The number of required nodes, cores, wall time and further settings can be specified via "#PBS" parameters in the job script header, before any executable commands in the script.
#!/bin/bash
#PBS -N job_name
#PBS -l select=2:node_type=hsw:mpiprocs=24
#PBS -l walltime=00:20:00

# Change to the directory that the job was submitted from
cd $PBS_O_WORKDIR

module load mpi/your_mpi_version_for_your_application

# Launch the parallel job on the allocated compute nodes
mpirun opt1 opt2 ./my_mpi_executable arg1 arg2 > my_output_file 2>&1
- The job is submitted with the qsub command (all #PBS header parameters of the script can also be set directly as qsub command-line options).
qsub my_batchjob_script.pbs
- Setting qsub options on the command line will override the settings given in the batch script:
qsub -N other_name -l select=2:node_type=hsw:mpiprocs=24 -l walltime=00:20:00 my_batchjob_script.pbs
- The batch script is not necessarily granted resources immediately; it may sit in the queue of pending jobs for some time before its required resources become available.
- At the end of execution, the output and error files are returned to your HOME directory.
- This example will run your executable "my_mpi_executable" in parallel with 48 MPI processes. The batch system will allocate 2 nodes to your job for a maximum time of 20 minutes and place 24 processes on each node. The batch system allocates nodes exclusively to a single job. After the walltime limit is exceeded, the batch system will terminate your job. The mpirun line above starts the parallel executable "my_mpi_executable" with the arguments "arg1" and "arg2". You need to have the nodes allocated by the batch system (qsub) before starting mpirun.
Interactive batch Mode
Submit a batch job
You will get each requested node for your exclusive usage. There are 2 methods to use the batch system:
- interactive batch jobs: if the requested resources are available, the job starts an interactive shell immediately. For interactive access the qsub command has the option -I, for example:
qsub -I ...
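A complete interactive request using the select syntax might, for example, look like this (resource values are only illustrative):
qsub -I -l select=1:node_type=hsw -l walltime=00:30:00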
- normal batch jobs: jobs will be started by the MOAB scheduler after passing the rules configured by the administrator (FAIRSHARE, BACKFILLING, ...).
Command for submitting a batch job request
A short explanation follows here. For detailed information, see the man pages or the latest documentation on the web.
man qsub
man pbs_resources
man pbs
Command to submit a batch job
qsub <option>
On success, the qsub command returns a request ID.
You have to specify the resources you need for your batch job. These resources are specified by including them in the -l option argument on the qsub command or in the PBS job script. There are 2 important resources you need to specify:
nodes=<number of nodes>:<feature>
walltime=<time>
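For example, both resources combined on the qsub command line might look like this (the feature name is taken from the table below; the script name is a placeholder):
qsub -l nodes=2:hsw128gb12c,walltime=01:00:00 my_batchjob_script.pbs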
- To distinguish between different nodes, features are assigned to each node. These features describe the properties of each node. Please use exactly 1 feature for each node type.
| feature | describes | notes | # of nodes (laki) | # of nodes (laki2) |
| --- | --- | --- | --- | --- |
| nehalem, mem12gb | No longer available! | | | |
| sb, mem32gb | cpu type intel sandy bridge, 32GB memory | 2 octa core-CPU per node | 98 | 178 |
| hsw128gb10c | cpu type intel haswell, 128GB memory | 2 x 10 core-CPU 2.6 GHz per node | 84 | 0 |
| hsw256gb10c | cpu type intel haswell, 256GB memory | 2 x 10 core-CPU 2.6 GHz per node | 4 | 0 |
| hsw128gb12c | cpu type intel haswell, 128GB memory | 2 x 12 core-CPU 2.5 GHz per node | 344 | 0 |
| hsw256gb12c | cpu type intel haswell, 256GB memory | 2 x 12 core-CPU 2.5 GHz per node | 16 | 0 |
| tesla | No longer available! | | | |
| mem64gb | sb node with 64GB memory | | 30 | 10 |
| mem128gb | sb node with 128GB memory | | 0 | 4 (1 shared) |
| il | interlagos node with 256GB memory (Interlagos@2.6GHz, 48 cores, QDR) | | 6 | 0 |
| il | interlagos node with 256GB memory, 4TB local scratch disk (Interlagos@2.6GHz, 48 cores, QDR) | | 4 | 0 |
| mem384gb, scratch11tb, k20xm | node with intel IvyBridge@3.3GHz, 16 cores, 384GB memory, 11TB local SSD scratch disk, Tesla K20Xm | only for single node jobs available | 3 | 0 |
| smp | cpu type intel nehalem, 1TByte memory, 8 socket 6 core CPUs | will be shared with other jobs! Please use "qsub -q smp -l nodes=1:smp..." | 1 | |
| mem144gb, scratch6tb, scratch2tb | cpu type intel nehalem, 148GB memory, 2TB local scratch (1 node) or 6TB local scratch (1 node) | 2 octa core-CPU per node + local scratch disk | 1 | |
| vis | Graphic node Nvidia Quadro FX 5800, 8 core intel W3540, 24GB memory | only 1 node per job! Please use "qsub -q vis -l nodes=1:vis..." | 3 | 1 |
| gtx680 | Cuda node with GTX680, 8 core intel E5540, 12GB memory | only 1 node per job! | 2 | 0 |
nodes=2:tesla+3:nehalem
The example above will allocate 2 nodes with feature tesla and 3 nodes with feature nehalem.
Usage of multi-socket nodes and multi-core CPUs
The batch system takes into account the number of CPUs and nodes (summarized as PEs, processing elements) when assigning resources.
Resource request with feature nodes only: if the request does not set the number of PEs per node (option ppn), the system assumes that the number of requested nodes equals the number of PEs. The batch system allocates a number of nodes that in total fulfill the requested number of PEs.
Example:
qsub -l nodes=2:nehalem ./myscript
The batch system allocates two nodes with feature nehalem. The file ${PBS_NODEFILE} contains:
node1
node2
Resource request with feature nodes and option ppn: if the request defines the option ppn, the specified number of PEs is allocated on each node. In particular, this option allows OpenMPI to place MPI ranks on a shared node or, alternatively, on distributed nodes.
Example:
qsub -l nodes=2:nehalem:ppn=2+1:nehalem:ppn=3 ./myscript
The batch system allocates 2 nodes with feature nehalem providing 2 PEs each and 1 node with feature nehalem providing 3 PEs. The file ${PBS_NODEFILE} then contains:
node1
node1
node2
node2
node3
node3
node3
Resource request with feature nodes and option pmem: for special applications it can be useful to allocate only 1 PE per node. In this case you need a little trick, because the batch system handles the option ppn=1 like a request without this option (see the first example). Therefore you have to define an additional option in your request. A simple way to do this is to request the maximum of the node's memory.
Example:
qsub -l nodes=2:nehalem,pmem=11gb ./myscript
The batch system allocates 2 nodes with feature nehalem. Each node with feature nehalem has 12 GByte of RAM installed, so the batch system is forced to allocate 2 nodes. The file ${PBS_NODEFILE} should then look like this:
node1
node2
Defaults for Resource Requests
If you don't set the resources for your job request, then you will get default resource limits for your job.
| feature | value | notes |
| --- | --- | --- |
| walltime | 00:10:00 | |
| nodes | 1 | |
| ppn | 1 | |
Please choose your resource requests carefully: the higher your specified resource limits, the lower the job priority. See also NEC_Cluster_QueuePolicies_for_(laki_+_laki2).
To have the same environment settings (exported environment) of your current session in your batch job, the qsub command needs the option argument -V.
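For example (the script name is a placeholder):
qsub -V -l nodes=1:nehalem my_batchjob_script.pbs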
Run job on another Account ID
There are Unix groups associated with the project account ID (ACID). To run a job on a non-default project budget, the group name of this project has to be passed in the group_list:
qsub -l nodes=1:nehalem -W group_list=<groupname>
Usage of a Reservation
For nodes which are reserved for special groups or users, you need to specify an additional option for this reservation:
- E.g. a reservation named john.1 will be used with the following command:
qsub -l nodes=1:nehalem,walltime=1:00 -W x=FLAGS:ADVRES:john.1 testjob.cmd
Job Arrays
Job arrays are groups of similar jobs. Those jobs usually have slightly different parameters which depend on the current job index. This job index is available in the $PBS_ARRAYID variable, which can be used in job scripts to calculate or generate any kind of job-specific (input) data.
Job arrays can be requested with '-t <range>[%<count_of_parallel_processes>]'.
The range is specified as a list of comma-separated values. The values may be individual integers as well as integer ranges; spaces are not allowed. A count of parallel running processes can be requested. It must be specified last in the array request and is delimited from the array range by a percent sign (%). For further details see the documentation.
In order to request a job array with job IDs 1,9,1000-1017 and restrict the number of jobs running in parallel to two, you can use the following:
qsub -t 1,9,1000-1017%2 myJobScript
or inside the job script
#!/bin/bash
#PBS -l nodes=1,walltime=00:01:00
#PBS -t 1,9,1000-1017%2

# Replace the following line with your own code
echo "This is job with id '$PBS_ARRAYID' running on compute node '`hostname`'";
exit 0;
If the job ID returned by qsub was '23', the job array will result in showq output similar to the following:
active jobs------------------------
JOBID      USERNAME  STATE    PROCS  REMAINING  STARTTIME
23[1]      testuser  Running  1      00:00:57   Thu Dec 20 17:38:10
23[9]      testuser  Running  1      00:00:57   Thu Dec 20 17:38:10

2 active jobs    2 of 3 processors in use by local jobs (66.67%)

eligible jobs----------------------
JOBID      USERNAME  STATE    PROCS  WCLIMIT    QUEUETIME

0 eligible jobs

blocked jobs-----------------------
JOBID      USERNAME  STATE    PROCS  WCLIMIT    QUEUETIME
23[1000]   testuser  Hold     1      00:01:00   Thu Dec 20 17:35:39
23[1001]   testuser  Hold     1      00:01:00   Thu Dec 20 17:35:39
23[1002]   testuser  Hold     1      00:01:00   Thu Dec 20 17:35:39
23[1003]   testuser  Hold     1      00:01:00   Thu Dec 20 17:35:39
23[1004]   testuser  Hold     1      00:01:00   Thu Dec 20 17:35:39
..
23[1017]   testuser  Hold     1      00:01:00   Thu Dec 20 17:35:39
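As mentioned above, $PBS_ARRAYID can be used inside the job script to derive job-specific input. A minimal sketch, assuming a hypothetical naming scheme with one input file per array index:
#!/bin/bash
#PBS -l nodes=1,walltime=00:01:00
#PBS -t 1,9,1000-1017%2

# Change to the submission directory
cd $PBS_O_WORKDIR

# Hypothetical naming scheme: one input and one output file per array index
INPUT_FILE="input_${PBS_ARRAYID}.dat"
./my_executable "$INPUT_FILE" > "output_${PBS_ARRAYID}.log" 2>&1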
Examples
Examples for PBS options in job scripts
You can submit batch jobs using qsub. A very simple qsub script for an MPI job with PBS (Torque) directives (#PBS ...) for the options of qsub looks like this:
#!/bin/bash
#
# Simple PBS batch script that reserves two cpus and runs one
# MPI process on each node
# The default walltime is 10min !
#
#PBS -l nodes=2:nehalem

cd $HOME/testdir
mpirun -np 2 -hostfile $PBS_NODEFILE ./mpitest
It is VERY important that you specify a shell in the first line of your batch script.
If you use the openmpi module, it is VERY important to omit the -hostfile option; otherwise an error like the following will occur:
[n110402:02618] pls:tm: failed to poll for a spawned proc, return status = 17002
[n110402:02618] [0,0,0] ORTE_ERROR_LOG: In errno in file ../../../../../orte/mca/rmgr/urm/rmgr_urm.c at line 462
[n110402:02618] mpirun: spawn failed with errno=-11
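With the openmpi module, the mpirun call from the script above would then simply omit the hostfile option, e.g.:
mpirun -np 2 ./mpitest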
If you want to use two MPI processes on each node this can be done like this:
#!/bin/bash
#
# Simple PBS batch script that reserves two nodes and runs a
# MPI program on four processors (two on each node)
# The default walltime is 10min !
#
#PBS -l nodes=2:nehalem:ppn=2

cd $HOME/testdir
mpirun -np 4 -hostfile machines ./mpitest
If you need 2h wall time and one node you can use the following script:
#!/bin/bash
#
# Simple PBS batch script that runs a scalar job using 2h
#
#PBS -l nodes=1:nehalem,walltime=2:00:00

cd $HOME/jobdir
./my_executable
Examples for starting batch jobs:
- Starting a script with all options specified inside the script file:
qsub <script>
- Starting a script using 3 nodes and a real time of 2 hours:
qsub -l nodes=3:nehalem,walltime=2:00:00 <script>
- Starting a script using 5 cluster nodes using 4 processors on each node with PBS Feature nehalem:
qsub -l nodes=5:nehalem:ppn=4,walltime=2:00:00 <script>
- Starting a script using 1 cluster node with PBS Feature mem24gb and 5 processors with PBS Feature mem12gb and a real job time of 1.5 hours:
qsub -l nodes=1:mem24gb+5:mem12gb,walltime=1:30:00 <script>
- Starting an interactive batch job using 5 processors with a real job time of 300 seconds:
qsub -I -l nodes=5:nehalem,walltime=300
For interactive batch jobs, you don't need a script file. If the requested resources are available, you will get an interactive shell on one of the allocated compute nodes. Which nodes have been allocated can be shown with the command cat $PBS_NODEFILE in the batch job shell or with the PBS status command qstat -n on the master node.
You can log in from the frontend or any assigned node to all other assigned nodes with ssh <nodename>.
If you exit the automatically established interactive shell to the node, it will be assumed that you finished your job and all other connections to the nodes will be terminated.
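An interactive session might then look like this (node names are only illustrative):
frontend> qsub -I -l nodes=2:nehalem,walltime=300
node1> cat $PBS_NODEFILE
node1
node2
node1> ssh node2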
- Possibilities to request 4 cluster nodes (with feature 'nehalem', i.e. 32 processors):
qsub -l nodes=4:nehalem:ppn=8 <script>
qsub -l nodes=4:nehalem:pmem=11gb <script>
The difference between the two kinds of job submission lies in the content of PBS_NODEFILE.
- Starting a script which should run under another Account ID:
First you have to know which Account IDs (group names) are valid for your login:
id
Choose a valid groupname for your job (abc12345 will serve as a placeholder here):
qsub -l nodes=5:nehalem,walltime=300 -W group_list=abc12345
Get the batch job status
Please don't run the commands qstat or showq in an iteration loop, e.g. using watch. Those commands will block all other users. You can use nstat in place of qstat or showq. This command uses an SQL database which is updated every 3 minutes.
qstat [options]
showq [options]   (showq -h for details)
nstat
For detailed information, see the man pages:
nstat -h
man qstat
man pbsnodes
lists all batch jobs:
qstat -a
lists all batch queues with resource limit settings:
qstat -q
lists node information of a batch job ID:
qstat -n <JOB_ID>
lists detailed information of a batch job ID:
qstat -f <JOB_ID>
lists information of the PBS node status:
pbsnodes -a
pbsnodes -l
gives information on PBS node and job status:
nstat
DISPLAY: X11 applications on interactive batch jobs
For X11 applications you need to have SSH X11 forwarding enabled. This is usually activated by default, but to be sure you can set 'ForwardX11 yes' in your $HOME/.ssh/config. To have the same DISPLAY of your current session in your batch job, the qsub command needs the option argument -X.
frontend> qsub -l nodes=2:nehalem,walltime=300 -X -I
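The ForwardX11 setting mentioned above could, as a sketch, be placed in $HOME/.ssh/config like this:
Host *
    ForwardX11 yes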