- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -
Batch System PBSPro (Hunter)
Introduction
The only way to start a job (parallel or single node) on the compute nodes of this system is to use the batch system. The installed batch system is PBSPro.
Writing a submission script is typically the most convenient way to submit your job to the batch system. You generally interact with the batch system in two ways: through options specified in job submission scripts (these are detailed below in the examples) and by using PBSPro commands on the login nodes. There are three key commands used to interact with PBSPro:
- qsub (submit a job)
- qstat (query the status of jobs and queues)
- qdel (delete a job from the queue)
Check the man page of PBSPro for more advanced commands and options with
man pbs_professional
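For example, a typical workflow with these three commands looks like the following sketch (the script name job.sh and the job ID 12345 are illustrative):

qsub job.sh
qstat -a
qdel 12345

qsub returns the ID of the newly submitted job, qstat -a lists your jobs and their states, and qdel removes the job with the given ID from the queue.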
Requesting resources with the batch system
Resources are allocated to jobs both by explicitly requesting them and by applying specified defaults.
Jobs explicitly request resources either at the host level in chunks defined in a selection statement, or in job-wide resource requests.
- job-wide request:
qsub ... -l <resource name>=<value>
The only resources that can be in a job-wide request are server-level or queue-level resources, such as walltime.
- selection statement:
qsub ... -l select=<chunks>
The only resources that can be requested in chunks are host-level resources, such as node_type and ncpus. A chunk is the smallest set of resources that is allocated to a job. It consists of one or more resource_name=value statements separated by colons, e.g.:
ncpus=2:node_type=mi300a
A selection statement is of the form:
-l select=[N:]chunk[+[N:]chunk ...]
Note: If N is not specified, it is taken to be 1. No spaces are allowed between chunks. A combined example with qsub is shown below.
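Both kinds of requests can be combined on a single command line; a sketch (the script name job.sh is illustrative):

qsub -l select=2:node_type=mi300a -l walltime=00:30:00 job.sh

This requests two APU nodes (a host-level chunk request) for 30 minutes of walltime (a job-wide request).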
Node types
You have to specify the resources you need for your batch job. These resources are specified by including them in the -l argument (selection statement and job-wide resources) on the qsub command or in the PBS job script. The two most important resources you have to specify are the number of nodes of a specific node type in the selection statement and the walltime your job needs in the job-wide resource request:
-l select=<number of nodes>:<node_resource_variable=type>:<node_resource_variable=type>... -l walltime=<time>
- To distinguish between the different nodes, node resource variables are assigned to each node: node_type, node_type_cpu, node_type_mem and node_type_core. On Hunter you have to specify at least the node_type resource variable for the CPU nodes or APU nodes. It is also possible to specify a valid combination of the resources for a specific type of node.
| node_type | node_type_cpu | node_type_mem | node_type_core | description | notes | # of nodes |
| --- | --- | --- | --- | --- | --- | --- |
| mi300a | AMD_Instinct_MI300A_Accelerator | 512gb | 96c | APU nodes (compute) | available on the default queue and queue test | 136 |
| genoa | AMD_EPYC_9374F | 768gb | 64c | CPU nodes (compute) | available on the default queue and queue test | 256 |
| genoa3tb64c | AMD_EPYC_9354 | 3tb | 64c | HPE (pre-/postprocessing) | only available on queue pre | 4 |
| genoa6tb64c | AMD_EPYC_9354 | 6tb | 64c | HPE (pre-/postprocessing) | only available on queue smp (you additionally need to specify ncpus and mem) | 1 |
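For the genoa6tb64c node in queue smp, ncpus and mem therefore have to be requested in addition to the node type; a hedged sketch (the concrete values and the script name job.sh are illustrative, not recommended settings):

qsub -q smp -l select=1:node_type=genoa6tb64c:ncpus=32:mem=2tb -l walltime=01:00:00 job.sh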
A job request for APU nodes can be specified as follows:
select=16:node_type=mi300a
The example above allocates 16 APU nodes with AMD Instinct MI300A accelerators.
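Putting this into a batch script, a minimal sketch might look as follows (the job name my_job, the mpirun launcher and the application ./my_app are assumptions; adapt them to your environment):

#!/bin/bash
#PBS -N my_job
#PBS -l select=16:node_type=mi300a
#PBS -l walltime=00:20:00

# change to the directory the job was submitted from
cd $PBS_O_WORKDIR

# launch the (hypothetical) MPI application
mpirun ./my_app

Submit the script with qsub; the #PBS lines are read by the batch system as if they were given on the qsub command line.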