Batch System PBSPro (Hunter)
Introduction
The only way to start a job (parallel or single node) on the compute nodes of this system is to use the batch system. The installed batch system is PBSPro.
Writing a submission script is typically the most convenient way to submit your job to the batch system. You generally interact with the batch system in two ways: through options specified in job submission scripts (details can be found below in the examples) and by using PBSPro commands on the login nodes. There are three key commands used to interact with PBSPro:
- qsub
- qstat
- qdel
Check the man page of PBSPro for more advanced commands and options with
man pbs_professional
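For illustration, a minimal command-line workflow could look like the following sketch (the script name job.sh and the job ID are placeholders):
qsub job.sh            # submit the job script; prints the job ID
qstat -u $USER         # list the status of your own jobs
qdel <job ID>          # remove a queued or running job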
Requesting Resources with the batch system
Resources are allocated to jobs both by explicitly requesting them and by applying specified defaults.
Jobs explicitly request resources either at the host level in chunks defined in a selection statement, or in job-wide resource requests.
- job wide request:
qsub ... -l <resource name>=<value>
The only resources that can be in a job-wide request are server-level or queue-level resources, such as walltime.
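For example, a job-wide request for one hour of walltime could look like this (job.sh is a placeholder script name):
qsub -l walltime=1:00:00 job.sh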
- selection statement:
qsub ... -l select=<chunks>
The only resources that can be requested in chunks are host-level resources, such as node_type and ncpus. A chunk is the smallest set of resources that will be allocated to a job. The select statement consists of one or more resource_name=value pairs separated by colons, e.g.:
ncpus=2:node_type=mi300a
A selection statement is of the form:
-l select=[N:]chunk[+[N:]chunk ...]
Note: If N is not specified, it is taken to be 1. No spaces are allowed between chunks.
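As a sketch of the syntax, a request for two chunks with 2 CPUs each on CPU nodes could look like this (whether partial-node chunks are accepted depends on the queue configuration):
qsub -l select=2:ncpus=2:node_type=genoa job.sh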
Node types
On Hunter, four different node types are currently installed. The main compute node type is the APU node with AMD Instinct MI300A accelerators. In addition, some CPU nodes are available. Four special nodes with more memory are available for pre- and post-processing tasks, and one node with very large memory is available via a special queue. This single (smp) node is shared by multiple jobs/users at the same time.
You therefore have to specify the resources you need for your batch jobs. These resources are specified by including them in the -l argument (selection statement and job-wide resources) on the qsub command line or in the PBS job script.
The two most important resources you have to specify are the number of nodes of a specific node type in the selection statement and the walltime you need for this job in the job-wide resource request:
-l select=<number of nodes>:<node_resource_variable=type>:<node_resource_variable=type>... -l walltime=<time>
- To distinguish between different nodes, node resource variables are assigned to each node: node_type, node_type_cpu, node_type_mem and node_type_core. On Hunter you have to specify at least the node_type resource variable for the CPU or APU nodes. It is also possible to specify a valid combination of these resources for a specific type of node (node_type_cpu, node_type_mem and node_type_core are only included for compatibility with the other, more heterogeneous clusters).
node_type | node_type_cpu | node_type_mem | node_type_core | description | notes | # of nodes |
mi300a | AMD_Instinct_MI300A_Accelerator | 512gb | 96c | APU nodes (compute) | available on the default queues (without specifying a queue) and queue test | 136 |
genoa | AMD_EPYC_9374F | 768gb | 64c | CPU nodes (compute) | available on the default queues (without specifying a queue) and queue test | 256 |
genoa3tb64c | AMD_EPYC_9354 | 3tb | 64c | HPE (Pre-Postprocessing) | only available on special queue pre | 4 |
genoa6tb64c | AMD_EPYC_9354 | 6tb | 64c | HPE (Pre-Postprocessing) | only available on special queue smp (you additionally need to specify ncpus and mem). This node can be shared by multiple jobs/users! | 1 |
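As a hedged sketch, a job on the shared smp node could be requested roughly as follows (the ncpus, mem, and walltime values are placeholders and have to match your actual needs):
qsub -q smp -l select=1:node_type=genoa6tb64c:ncpus=16:mem=2tb -l walltime=4:00:00 job.sh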
A job request for APU nodes can, for example, be specified as:
qsub -l select=16:node_type=mi300a -l walltime=1:00:00
The example above will allocate 16 APU nodes with AMD Instinct MI300A accelerators for 1 hour.
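The same resources can also be requested via #PBS directives inside the job script itself. A minimal sketch, assuming a hypothetical application started with mpirun (adapt the job name, resources, and launch command to your case):
#!/bin/bash
#PBS -N my_mi300a_job
#PBS -l select=16:node_type=mi300a
#PBS -l walltime=1:00:00

# change to the directory the job was submitted from
cd $PBS_O_WORKDIR

# placeholder launch command; use the launcher appropriate for your application
mpirun ./my_application
Such a script is then submitted with qsub <script name>.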