- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -
Batch System PBSPro (Hawk)
Introduction
The only way to start a job (parallel or single node) on the compute nodes of this system is to use the batch system. The installed batch system is PBSPro.
Writing a submission script is typically the most convenient way to submit your job to the batch system. You generally interact with the batch system in two ways: through options specified in job submission scripts (these are detailed below in the examples) and by using PBSPro commands on the login nodes. There are three key commands used to interact with PBSPro:
- qsub
- qstat
- qdel
Check the PBSPro man pages (e.g. man qsub) for more advanced commands and options.
Requesting Resources with the batch system
Resources are allocated to jobs both by explicitly requesting them and by applying specified defaults.
Jobs explicitly request resources either at the host level in chunks defined in a selection statement, or in job-wide resource requests.
- job wide request:
qsub ... -l <resource name>=<value>
The only resources that can be in a job-wide request are server-level or queue-level resources, such as walltime.
- selection statement:
qsub ... -l select=<chunks>
The only resources that can be requested in chunks are host-level resources, such as mem and ncpus. A chunk is the smallest set of resources that will be allocated to a job. It consists of one or more resource_name=value statements separated by colons, e.g.:
ncpus=2:mem=128GB
A selection statement is of the form:
-l select=[N:]chunk[+[N:]chunk ...]
Note: If N is not specified, it is taken to be 1. No spaces are allowed between chunks.
Node types
You have to specify the resources you need for your batch job. These resources are specified by including them in the -l argument (selection statement and job-wide resources) of the qsub command or in the PBS job script. The two most important resources you have to specify are the number of nodes of a specific node type in the selection statement and the walltime in the job-wide resource request:
select=<number of nodes>:<node_resource_variable=type>
walltime=<time>
- To distinguish between different nodes, 4 node resource variables are assigned to each node: node_type, node_type_cpu, node_type_mem and node_type_core. You have to specify at least one of these resource variables, or any valid combination of them that matches a specific type of node.
node_type | node_type_cpu | node_type_mem | node_type_core | description | notes | # of nodes |
rome | AMD_EPYC_7742 | 256gb | 128c | HPE Apollo 9000 (compute) | | 4096 |
rome | AMD_EPYC_7702 | 2048gb | 128c | HPE (Pre-/Postprocessing) | only available on queue smp (you also need to specify ncpus and mem) | 4 |
rome | AMD_EPYC_7702 | 4096gb | 128c | HPE (Pre-/Postprocessing) | only available on queue smp (you also need to specify ncpus and mem) | 1 |
rome-ai | AMD_EPYC_7702 | 1024gb | 128c | HPE Apollo 6500 Gen10 Plus (AI) | | 24 |
nv-a100-40gb | AMD_EPYC_7702 | 1024gb | 128c | HPE Apollo 6500 Gen10 Plus (AI) | | 20 |
nv-a100-80gb | AMD_EPYC_7702 | 1024gb | 128c | HPE Apollo 6500 Gen10 Plus (AI) | | 4 |
A compute node type job will be specified by:
select=64:node_type=rome:node_type_mem=256gb
The example above will allocate 64 compute nodes with 256 GB of memory each.
Batch Mode
Production jobs are typically run in batch mode. Batch scripts are shell scripts containing flags and commands to be interpreted by a shell and are used to run a set of commands in sequence.
- The number of required nodes, cores, wall time and further parameters can be specified via "#PBS" directives in the job script header, before any executable commands in the script.
#!/bin/bash
#PBS -N job_name
#PBS -l select=2:node_type=rome:mpiprocs=128
#PBS -l walltime=00:20:00

# Change to the directory that the job was submitted from
cd $PBS_O_WORKDIR

# Launch the parallel MPI application (compiled with Intel MPI) on the allocated compute nodes
mpirun -np 256 ./my_mpi_executable arg1 arg2 > my_output_file 2>&1
- The job is submitted by the qsub command (all script head parameters #PBS can also be adjusted directly by qsub command options).
qsub my_batchjob_script.pbs
- Setting qsub options on the command line will overwrite the settings given in the batch script:
qsub -N other_name -l select=2:node_type=rome:mpiprocs=128 -l walltime=00:20:00 my_batchjob_script.pbs
- The batch script is not necessarily granted resources immediately, it may sit in the queue of pending jobs for some time before its required resources become available.
- At the end of the execution, output and error files are returned to your HOME directory
- This example will run your executable "my_mpi_executable" in parallel with 256 MPI processes (mpiprocs=128 is the number of MPI processes on each node). The batch system will allocate 2 nodes to your job for a maximum time of 20 minutes and place 128 processes on each node. Nodes are allocated exclusively for one job. After the walltime limit is exceeded, the batch system will terminate your job. The mpirun example above will start the parallel executable "my_mpi_executable" with the arguments "arg1" and "arg2". You need to have nodes allocated by the batch system (qsub) before starting mpirun.
Using the -koed option (qsub -koed my_batchjob_script.pbs) and redirecting the output to a file (see example above) makes it possible to view STDOUT and STDERR of your job script while the job is running.
Interactive Batch Mode
Interactive mode is typically used for debugging or optimizing code but not for running production code. To begin an interactive session, use the "qsub -I" command:
qsub -I -l select=2:node_type=rome:ncpus=128:mpiprocs=128 -l walltime=00:30:00
If the requested resources are available and free (in the example above: 2 rome nodes with 128 cores each, 30 minutes, prepared for 128 MPI processes on each node), you will get a new session on the job's head node for your requested resources. Now you have to use the mpirun command to launch your parallel application on the allocated compute nodes. When you are finished, enter logout to exit the batch system and return to the normal command line.
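For example, once the interactive session has started, the parallel application could be launched on the allocated nodes like this (a minimal sketch; the executable name is a placeholder, matching the 2 nodes with 128 MPI processes each requested above):
mpirun -np 256 ./my_mpi_executable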
PBS_NODEFILE (MPI usage of multi-socket nodes and multi-core cpus)
In most MPI environments, the PBS_NODEFILE is useful for starting the correct number of MPI processes on each allocated node. The content of the job's ${PBS_NODEFILE} depends on the number of MPI processes requested for each chunk. Inside the select statement you can define an mpiprocs option (type: integer) for each chunk. The number of lines in PBS_NODEFILE is the sum of the values of mpiprocs for all chunks requested by the job. For each chunk with mpiprocs=P, the host name of that chunk is written to the PBS_NODEFILE P times.
Example:
qsub -l select=2:node_type=rome ./myscript
The batch system allocates two nodes of type rome. The file ${PBS_NODEFILE} then contains:
node1
node2
If a chunk request has the mpiprocs option defined, the corresponding number of PEs is allocated on each node. In particular, this option allows MPI to place the ranks either on a shared node or on distributed nodes.
select example with 2 chunk requests (separated by '+'):
qsub -l select=2:node_type=rome:mpiprocs=2+1:node_type=rome:mpiprocs=3 ./myscript
The batch system allocates 2 nodes of type rome with 2 PEs each and 1 node of type rome with 3 PEs. The file ${PBS_NODEFILE} then contains:
node1
node1
node2
node2
node3
node3
node3
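If your MPI start mechanism does not evaluate ${PBS_NODEFILE} automatically, the file can also be used explicitly in the job script, e.g. to derive the total number of MPI processes. A minimal sketch (the executable name is a placeholder):
# derive the total number of MPI processes from the node file
NP=$(wc -l < $PBS_NODEFILE)
mpirun -np $NP ./my_mpi_executable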
Defaults for Resource Requests
If you don't set the resources for your job request, then you will get default resource limits for your job.
resource | value | notes |
select | 1 | |
mpiprocs | 1 | |
node_type | rome | |
ncpus | 1 | |
THPOFF | False |
Please select your resource requests carefully.
To have the same environment settings (exported environment) of your current session in your batch job, the qsub command needs the option -V. Transparent Huge Pages (THP) are enabled by default, but can be disabled by setting the THPOFF resource variable to true:
qsub -l select=1:node_type=rome:THPOFF=true
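Analogously, a minimal example for exporting the environment of the current session with -V (the script name is a placeholder):
qsub -V -l select=1:node_type=rome my_batchjob_script.pbs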
Run job on other Account ID
There are Unix groups associated with the project account ID (ACID). To run a job on a non-default project budget, the group name of this project has to be passed via group_list:
qsub -l select=1:node_type=rome -W group_list=<groupname>
To get your available groups:
id -Gn
Usage of a Reservation
For nodes which are reserved for special groups or users, you additionally need to specify the queue which is intended for this reservation:
- E.g. a reservation of some nodes is bound to the queue named workday:
qsub -q workday -l select=1:node_type=rome -l walltime=1:00 testjob.cmd
Job Arrays
Job arrays are groups of similar jobs. Those jobs usually have slightly different parameters which depend on the current job index. This job index will be available in the $PBS_ARRAY_INDEX variable, which can be used in job scripts to calculate or generate any kind of job-specific (input)data.
Job arrays can be requested with
qsub -J <range> -r y <my_array_jobscript>
range is specified in the form X-Y[:Z] where X is the first index, Y is the upper bound on the indices and Z is the stepping factor. For example, 2-7:2 will produce indices of 2, 4, and 6. If Z is not specified, it is taken to be 1.
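As a minimal sketch (executable and input file naming are hypothetical), a job array script could use the index to select its input like this; it would be submitted e.g. with qsub -J 1-10 -r y <script>:
#!/bin/bash
#PBS -N array_example
#PBS -l select=1:node_type=rome
#PBS -l walltime=00:10:00

cd $PBS_O_WORKDIR

# Each subjob processes the input file matching its array index (file naming is hypothetical)
./my_executable input_${PBS_ARRAY_INDEX}.dat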
Examples
Examples for PBS options in job scripts
- You can submit batch jobs using qsub. A very simple qsub script for an MPI job with PBSPro directives (#PBS ...) for the options of qsub looks like this:
#!/bin/bash
#
# Simple PBS batch script that reserves two exclusive rome nodes
# and runs only one MPI process on each node (in total 2 MPI processes)
# The walltime is 10min
#
#PBS -l select=2:node_type=rome:mpiprocs=1
#PBS -l walltime=00:10:00

### go to directory where your job request was submitted
cd $PBS_O_WORKDIR

### run your parallel application on the allocated nodes
mpirun -np 2 ./mpitest
Warning:
- you have to specify a shell in the first line of your batch script
- you have to specify the number of nodes you need and the node type
- you have to specify the walltime the job needs
- allocated nodes will not be shared with other jobs, even if you use the nodes only partially
- If you want to use four MPI processes on each node, this can be done like this:
#!/bin/bash
#
# Simple PBS batch script that reserves two exclusive rome nodes
# and runs four MPI processes on each node (in total 8 MPI processes)
# The walltime is 10min
#
#PBS -l select=2:node_type=rome:mpiprocs=4
#PBS -l walltime=00:10:00

### go to directory where your job request was submitted
cd $PBS_O_WORKDIR

### run your parallel application on the allocated nodes
mpirun -np 8 ./mpitest
- If you need 2h wall time and one node, you can use the following script:
#!/bin/bash
#
# Simple PBS batch script that runs a scalar job
# on 1 rome node using 2h
#
#PBS -l select=1:node_type=rome,walltime=2:00:00

cd $PBS_O_WORKDIR
./my_executable
If doing so, take care to distribute the processes within the nodes in a reasonable manner (cf. here)! Otherwise valuable resources will be unused and performance degrades!
Examples for starting batch jobs:
- Starting a script with all options specified inside the script file:
qsub <script>
- Starting a script using 3 nodes of node_type 'rome' and a real job time of 2 hours:
qsub -l select=3:node_type=rome,walltime=2:00:00 <script>
- Starting a script using 5 cluster nodes of node_type 'rome' using 4 processors on each node:
qsub -l select=5:node_type=rome:mpiprocs=4,walltime=2:00:00 <script>
- Starting a script using 1 cluster node of type 'rome' with 256GB memory and additionally requesting 5 other 'rome' nodes with 128GB memory, using 128 MPI processes on each of these 5 nodes; real job time is 1.5 hours:
qsub -l select=1:node_type=rome:node_type_mem=256gb+5:node_type=rome:node_type_mem=128gb:mpiprocs=128,walltime=1:30:00 <script>
- Starting an interactive batch job using 5 rome nodes with a real job time of 300 seconds:
qsub -I -l select=5:node_type=rome,walltime=300
Note: For interactive batch jobs, you don't need a script file. If the requested resources are available, you will get an interactive shell on one of the allocated compute nodes. Which nodes are allocated can be shown with the command
cat $PBS_NODEFILE
on the batch job shell or with the PBS status command
qstat -n <jobid>
on the master node. You can log in from the frontend or any assigned node to all other assigned nodes by
export PBS_JOBID=<jobid>
ssh <nodename>
<jobid> is of format (number.batchserver e.g: 123456.hawk-pbs5).
If you exit the automatically established interactive shell to the node, it will be assumed that you finished your job and all other connections to the nodes will be terminated.
- Starting a script which should run on another Account ID:
First you have to know which Account IDs (group names) are valid for your login:
id
Choose a valid groupname for your job (abc12345 will serve as a placeholder here):
qsub -l select=5:node_type=rome,walltime=300 -W group_list=abc12345
- Starting a batch job on the pre- or postprocessing nodes with large memory requirements:
For these special nodes you additionally need to specify the number of cores and the amount of memory you need.
The nodes are configured for shared usage which means that several user jobs can run on the same node at the same time.
You can only request 1 node in your batch job and you need to specify the queue "smp".
qsub -q smp -l select=1:ncpus=4:mem=512gb,walltime=300
This allocates 4 cores on 1 of the special nodes for pre- and postprocessing and reserves 512 GB of memory on that node for your job.
For these pre- and postprocessing nodes operated in shared mode, job accounting is based only on the requested cores and memory.
Practical Notes / Examples
Core order
On Rome-based nodes, the core id corresponds to hyperthreads and sockets as follows:
core 0 - core 63: hyperthread 0 @ socket 0
core 64 - core 127: hyperthread 0 @ socket 1
core 128 - core 191: hyperthread 1 @ socket 0
core 192 - core 255: hyperthread 1 @ socket 1
Hence, cores 128 to 255 are using the same physical resources as cores 0 to 127! Only use them if you understand the concept of hyperthreads and actually want to use them! If you do not want to use them, start a maximum of 128 threads per node only!
Pinning
We recommend always using omplace (in hybrid as well as pure MPI jobs) to pin processes and threads to CPU cores (cf. below) in order to prevent expensive migration. If usage of omplace is not possible (e.g. if using a profiling wrapper), setting MPI_DSM_CPULIST might be an alternative.
Shall I use all the available cores?
Due to limited memory bandwidth, it might be beneficial not to use all the available cores of a node. Unfortunately, you have to figure out your sweet spot by means of trial & error. While doing this, please keep in mind the internal structure of the processor (cf. Processor) and try to uniformly distribute processes over the architectural building blocks (i.e. CCXs, CCDs, NUMA nodes and sockets). In order to make this easier, please use the block and stride features of omplace (cf. the manpage and see an example here), or use the scripts provided below to generate lists of core IDs to be passed to omplace via the -c flag if your intended placement is not possible by means of blocks & strides.
#!/usr/bin/env python3
#########################################################################
# Usage: ./distribute_by_fraction.py <numerator> <denominator>          #
# Example: ./distribute_by_fraction.py 32 128                           #
#                                                                       #
# The script will then generate a list of <numerator>/<denominator>*128 #
# cores to be used, equally distributed among the available 128 cores.  #
#########################################################################
import sys

numerator = int(sys.argv[1])
denominator = int(sys.argv[2])

core_list = ""
for offset in range(0, 127, denominator):
    for j in range(numerator):
        index = int(round(float(denominator)/float(numerator) * j))
        core_list = core_list + str(offset + index) + ","
print(core_list[:-1])
#!/usr/bin/env python3
#########################################################################
# Example usage: ./distribute_by_pattern.py 1 0 0 0                     #
#                                                                       #
# This will generate a list with core 0 being used, cores 1-3 not being #
# used and so on (i.e. pattern will be repeated until status of all 128 #
# cores is defined).                                                    #
#########################################################################
import sys

core_list = ""
for i in range(128):
    if sys.argv[i%(len(sys.argv) - 1) + 1] == "1":
        core_list = core_list + str(i) + ","
print(core_list[:-1])
If you do not use all the available cores, take care to specify the correct number of MPI processes per node with #PBS -l! Otherwise, the nodes at the beginning of your allocation will be entirely filled while those at the end remain empty! The latter would waste valuable resources and significantly degrade performance!
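As a sketch of how such a generated core list could be passed to omplace via -c (assuming, purely as an illustration, a request like select=4:node_type=rome:mpiprocs=32 and a placeholder executable name):
# generate 32 evenly distributed core IDs per node and hand them to omplace (placeholder executable)
mpirun -np 128 omplace -c "$(./distribute_by_fraction.py 32 128)" ./my_mpi_executable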
MPI Batchjobs
pure MPI job using HPE MPI
Here is a simple pbs job script:
#!/bin/bash
#PBS -N Hi_Thomas
#PBS -l select=16:node_type=rome:mpiprocs=128
#PBS -l walltime=00:20:00

mpirun -np 2048 ./hi.hpe
To submit the job script, execute
qsub <name of your batch script>
hybrid MPI/OpenMP job using HPE MPI
To run an MPI application with 128 processes and two OpenMP threads per process on two compute nodes, include the following in the PBS job script:
#!/bin/bash
#PBS -N Hi_MPI_OpenMP
#PBS -l select=2:node_type=rome:mpiprocs=64:ompthreads=2
#PBS -l walltime=00:20:00

export OMP_NUM_THREADS=2
mpirun -np 128 omplace -nt 2 [-vv] ./hi.mpiomp
The omplace command helps with the placement of OpenMP threads within an MPI program. In the above example, the threads in a 128-process MPI program with two threads per process are placed as follows:
- Rank 0, thread 0 on core 0 of socket 0 on compute node 0
- Rank 0, thread 1 on core 1 of socket 0 on compute node 0
- ...
- Rank 31, thread 1 on core 63 of socket 0 on compute node 0
- Rank 32, thread 0 on core 0 of socket 1 on compute node 0
- ...
- Rank 63, thread 1 on core 63 of socket 1 on compute node 0
- Rank 64, thread 0 on core 0 of socket 0 on compute node 1
- ...
- Rank 127, thread 1 on core 63 of socket 1 on compute node 1
The optional -vv parameter prints out the placement of the processes and threads to standard output.
Warning: Due to the limited scaling of the standard output, you should not use the optional parameter -vv for medium and large jobs!
hybrid MPI/OpenMP job using HPE MPI and hyperthreads
The job described before can be run on the same physical resources with twice the number of processes and threads by means of hyperthreads as follows:
#!/bin/bash
#PBS -N Hi_MPI_OpenMP_HT
#PBS -l select=2:node_type=rome:mpiprocs=128:ompthreads=2
#PBS -l walltime=00:20:00

export OMP_NUM_THREADS=2
mpirun -np 256 omplace -nt 2 [-vv] ./hi.mpiomp
Ranks will be placed as follows:
- Rank 0, thread 0 on logical core 0 of core 0 of socket 0 on compute node 0
- Rank 0, thread 1 on logical core 0 of core 1 of socket 0 on compute node 0
- ...
- Rank 31, thread 1 on logical core 0 of core 63 of socket 0 on compute node 0
- Rank 32, thread 0 on logical core 0 of core 0 of socket 1 on compute node 0
- ...
- Rank 63, thread 1 on logical core 0 of core 63 of socket 1 on compute node 0
- Rank 64, thread 0 on logical core 1 of core 0 of socket 0 on compute node 0
- ...
- Rank 127, thread 1 on logical core 1 of core 63 of socket 1 on compute node 0
- Rank 128, thread 0 on logical core 0 of core 0 of socket 0 on compute node 1
- ...
- Rank 255, thread 1 on logical core 1 of core 63 of socket 1 on compute node 1
pure HPE MPI job with stride > 1
If you need to leave cores unused, proceed as follows in order to still distribute the processes uniformly over the cores:
#!/bin/bash
#PBS -N Hi_MPI_Not_All_Cores
#PBS -l select=4:node_type=rome:mpiprocs=32
#PBS -l walltime=00:20:00

mpirun -np 128 omplace -c 0-127:st=4 ./hi.mpi
This will start a total of 128 processes, 32 on each of the 4 nodes on cores [0,4,8,...,116,120,124], i.e. with a stride of 4 (which means having one process per CCX respectively L3 slice (cf. Processor)).
Make sure to provide the exact number of MPI processes that you want to run on each node with mpiprocs=<mpi processes per node>.
With e.g. mpiprocs=128 set in this example, 4 MPI processes would be stacked on every 4th core on the first node resulting in extremely bad performance!
In addition to specifying a stride, you can also specify a so called "block size" by means of bs=<block size>. Doing so will contiguously place <block size> processes/threads in every stride, e.g.
#!/bin/bash
#PBS -N Hi_MPI_Stride_and_Block_size
#PBS -l select=4:node_type=rome:mpiprocs=64
#PBS -l walltime=00:20:00

mpirun -np 256 omplace -c 0-127:st=4+bs=2 ./hi.mpi
will place processes on cores 0, 1, 4, 5, 8, 9, etc.
With respect to more advanced placement of processes and threads, cf.
man omplace
as well as here.
pure MPI job using OpenMPI
Here is a simple pbs job script:
#!/bin/bash
#PBS -N Hi_Thomas
#PBS -l select=16:node_type=rome:mpiprocs=128
#PBS -l walltime=00:20:00

module load openmpi
mpirun -np 2048 --map-by core --bind-to core ./hi.hpe
more complex jobs using OpenMPI
If you like to use OpenMPI in more complex configurations, please cf. here.
Test placement
In order to test whether the used commands yield the intended placement, feel free to use this simple code.
Get the batch job status
qstat [options]
batchstat [options]
For detailed information, see the man pages:
batchstat -h
man qstat
man pbsnodes
list all your own batch jobs:
qstat -a
list all batch jobs (anonymous)
batchstat
lists all batch queues with resource limit settings:
qstat -q
lists node information for one of your batch job IDs:
qstat -n <JOB_ID>
lists detailed information for one of your batch job IDs:
qstat -f <JOB_ID>
Displays estimated start time for your queued jobs
qstat -T <JOB_ID>
Displays status information for your jobs, job arrays, and subjobs:
qstat -t <JOB_ID>
lists information of the PBS node status:
pbsnodes -a
pbsnodes -l
gives information on PBS node and job status:
batchstat
ssh from login nodes to your allocated nodes of your job
You can only connect to nodes that belong to you, i.e. the nodes allocated to your jobs. To log in to these nodes via ssh from the login nodes (frontend nodes), you have to set the environment variable PBS_JOBID. First find your running job IDs and the nodes belonging to each job:
qstat -rnw
The next step is setting the environment variable PBS_JOBID on the login node:
export PBS_JOBID=<JOB ID>
(the <JOB ID> is in form of 123456.hawk-pbs5, i.e. includes the batch server ".hawk-pbs5")
Now you are able to login form login node via ssh to the allocated nodes of your job with the corresponding jobid.
Batch Queue Policies and Limitations
Different job queues are available for efficient resource usage.
In most cases users do not need to declare a job queue with the qsub command; jobs are sorted into the right class automatically. In the following, the definition of each job queue is given. In general, jobs with a duration of up to 24 hours and 4096 nodes in total can be submitted. Some special resources, such as nodes with very large memory (job-sharing nodes), are only available in special queues which have to be declared with the qsub command. For larger jobs or for special job requirements, different restrictions are in place, or you have to consult the project team.
These limit settings and policies may be changed in the future to adapt cluster usage to new user requirements.
At the moment the following queues and policies are defined:
route (default)
If users don't declare a queue at qsub submission, this will be the job's default queue. The "route" queue is a routing queue whose final destinations for industrial user jobs and standard jobs depend on the users/groups and the requested resources. The destination queues of "route" for standard jobs (academic users) are:
single
This queue is available for all single node jobs.
resource | min | max | note |
walltime | 24 hours | ||
available nodes | 1 per job (384 in total) | only single node jobs | |
priority | low | ||
joblimit | 20 per user, 30 per group |
normal
This is for all regular parallel jobs using 2 nodes and more.
resource | min | max | note |
walltime | 24 hours | ||
available nodes | 64 | 1024 per job (3072 in total) | |
priority | normal | ||
joblimit | 20 per user |
small
This is for all regular parallel jobs using 2 nodes and more.
resource | min | max | note |
walltime | 24 hours | ||
available nodes | 2 | 63 per job (768 in total) | |
priority | low | ||
joblimit | 20 per user |
test
This queue is for tests and development with restricted resource needs. The jobs in this queue are expected to deliver results after a very short time. It is forbidden to use this queue for production jobs. Users have to declare this queue with the qsub submission.
resource | min | max | note |
walltime | 25 minutes | ||
available nodes | 384 | ||
priority | very high | ||
joblimit | 1 per user (25 for ALL) |
interactive
This queue is only for batch jobs in interactive batch mode, which can also be used for tests and development. Users cannot declare this queue at qsub submission; all interactive batch jobs are routed to this queue automatically.
resource | min | max | note |
walltime | 8 hours | ||
available nodes | 192 | only for job in interactive batch mode | |
priority | very high | ||
joblimit | 2 per user |
smp
This queue is for the pre- and postprocessing nodes with large memory requirements.
resource | min | max | note |
walltime | 24 hours | ||
available nodes | 5 | only 1 node for each job (it's necessary to request ncpus and mem) | |
cores | 128 | default ncpus is 1, you need to specify the number of cores (ncpus) | |
memory | 4TB | default mem is 1GB, you need to specify the amount of memory (mem) | |
shared | the nodes might be shared by several users and several batch jobs | ||
joblimit | 20 |
Job Run Limitations
- The maximum time limit for a job is 24 hours.
- User limits:
- limited number of jobs of one user that can run at the same time
- User Group limits:
- limited number of jobs of users in the same group that can run at the same time
- Batch Queue limits of all user jobs:
- not all nodes / node types are available on each queue
- The number of jobs for each user in the different job queues is restricted. If you reach this number, you can submit further jobs once prior jobs have ended.
- (If more jobs are submitted than allowed for one job queue, the additional ones will be placed in the dispatcher queue 'route' and will move up into the proper destination queue after jobs from this user in the corresponding queue have ended. The waiting queue for each user will hold up to 10 jobs. With this it is possible to submit jobs ahead.)
Topology aware scheduling
Hawk deploys an InfiniBand HDR based interconnect with a 9-dimensional enhanced hypercube topology. Please refer to here with respect to the latter. InfiniBand HDR has a bandwidth of 200 Gbit/s and an MPI latency of ~1.3 µs per link. The full bandwidth of 200 Gbit/s can be used when communicating between the 16 nodes connected to the same node of the hypercube (cf. here). Within the hypercube, the higher the dimension, the less bandwidth is available. Topology-aware scheduling is used to exclude major performance fluctuations and to avoid network congestion.
Hypercube dimensions
Hawk has 44 compute node racks (reduced to 32 racks on 2024-06-18 in preparation for the next-generation supercomputer Hunter). Each rack has 4 chassis. Each chassis has 8 trays. Each tray has 4 compute nodes. An overview command is available on the login nodes:
batchstat -N
- Dimension 0: 16 nodes, 4 trays (1..4 or 5..8) combinations
- Dimension 1: 32 nodes, 1 compute node chassis
- Dimension 2: 64 nodes, 2 chassis (1..2 or 3..4) combinations
- Dimension 3: 128 nodes, 1 compute node rack
- Dimension 4: 256 nodes, 2 racks set combinations
- Dimension 5: 512 nodes, 4 racks set combinations
- Dimension 6: 1024 nodes, 8 racks set combinations
- Dimension 7: 2048 nodes, 16 racks set combinations
Scheduling implementation
Hawk compute nodes are available in 2 partitions.
- The first partition (inside an 8-dimensional hypercube) contains 3072 nodes (Hawk racks 1..4, 9..16, 17..20, 25..32). This partition is intended to run large jobs with the following node counts: 64, 128, 256, 512, 1024 nodes. Jobs are placed in an exact dimension according to the job size. Thus, each job has its own dedicated hypercube.
- The second partition contains 1024 nodes (Hawk racks 5..8, 21..24). This partition is intended to run small job requests of size 1..63 and 65..127 nodes. Jobs are placed in the smallest matching hypercube according to the job size (a 4 node job will be placed in a hypercube of dimension 0, a 48 node job will be placed in a hypercube of dimension 2). Multiple jobs can share one hypercube at the same time.
This ensures optimal system utilization while simultaneously exploiting the network topology. It avoids network congestions and stabilizes the overall system usage.
Node failures will reduce the number of available hypercubes significantly, especially for dimensions 5 and higher. This leads to large jobs having fewer start slots available. For this case we have implemented 2 mechanisms in the scheduler that can mitigate this effect:
- Jobs requesting 512 nodes and more will be placed in a hypercube of the matching dimension + 1.
- For longer-lasting node failures in the large partition, we virtually borrow nodes from the small partition.
Impacts for usage
- Warning: Larger jobs can only be requested with defined node numbers (64, 128, 256, 512, 1024) in regular operation.
- Jobs with a node number of 1 .. 63, 65 .. 127 nodes are processed in a small partition containing 1024 nodes.
- Jobs are placed only in matching hypercubes. It will happen that enough nodes are free and available, but for some job sizes no suitable free hypercubes are available at a given time. Jobs of this size have to wait.
- Separate requests for jobs from 2048 nodes and more are processed at special times (called XXL days). Please ask if needed.
Energy efficiency / Power Management
HLRS has been asked to use and save energy efficiently. Therefore, a concept was developed in cooperation with HPE that executes batch jobs with maximum efficiency and at the same time reduces the energy requirement to a minimum.
On Tuesday, January 30, 2024, this dynamic power management was activated on the entire system. More information is available here.
Further information
RAM disk
For some applications that perform many temporary local I/O operations, it can make sense in justified exceptional cases to use a kind of RAM disk. In this case, RAM of the compute node is occupied: from the application's point of view, the data is written to a locally designated directory, but this reduces the RAM available to the running application. The maximum capacity of the RAM disk is 128 GB. However, depending on how much memory the application and the OS require, not all of this capacity might be available. The RAM disk is available via the following path:
$TMPDIR
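A minimal sketch of how the RAM disk could be used within a job script (all file and executable names are placeholders):
#!/bin/bash
#PBS -l select=1:node_type=rome
#PBS -l walltime=00:30:00

# copy input data to the RAM disk, run there, and copy results back (names are placeholders)
cp $PBS_O_WORKDIR/input.dat $TMPDIR/
cd $TMPDIR
$PBS_O_WORKDIR/my_executable input.dat
cp results.dat $PBS_O_WORKDIR/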
Slides
With respect to further details, please refer to these slides.