- Information in the HLRS Wiki is not legally binding and provided without guarantee -

Batch System PBSPro (Hawk)

Revision as of 15:59, 16 February 2020 by Hpcbk (talk | contribs) (→‎Introduction)

The batch system on Hawk is PBSPro 19.2.X. For general usage, see the PBS User Guide (19.2.3).

Warning: At the moment the setup is basic. More features, testing and a production-like setup will follow in February/March 2020.


Introduction

The only way to start a job (parallel or single node) on the compute nodes of this system is to use the batch system.

Writing a submission script is typically the most convenient way to submit your job to the batch system. You generally interact with the batch system in two ways: through options specified in job submission scripts (these are detailed below in the examples) and by using PBSPro commands on the login nodes. There are three key commands used to interact with PBSPro:

  • qsub
  • qstat
  • qdel
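
Typical usage of these three commands might look as follows (the job script name and the job ID are placeholders):

 qsub job.pbs     # submit a job script; prints the job ID
 qstat -u $USER   # show the status of your own jobs
 qdel 123456      # delete the job with the given (hypothetical) ID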

Check the man page of PBSPro for more advanced commands and options:

 man pbs_professional


Requesting resources using the batch system

Resources are allocated to jobs both by explicitly requesting them and by applying specified defaults.
Jobs explicitly request resources either at the host level in chunks defined in a selection statement, or in job-wide resource requests.

    Format:
  • job wide request:
       qsub ... -l <resource name>=<value> 

    The only resources that can be in a job-wide request are server-level or queue-level resources, such as walltime.

  • selection statement:
       qsub ... -l select=<chunks> 

    The only resources that can be requested in chunks are host-level resources, such as mem and ncpus. A chunk is the smallest set of resources that will be allocated to a job. It is one or more resource_name=value statements separated by a colon, e.g.:

    ncpus=2:mem=32GB
    A selection statement is of the form:
    
      -l select=[N:]chunk[+[N:]chunk ...] 
    Note: If N is not specified, it is taken to be 1. No spaces are allowed between chunks.
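
Putting both together, a request for two identical chunks plus a job-wide walltime could look like this (the resource values are only illustrative):

 qsub -l select=2:ncpus=128:mem=32GB -l walltime=01:00:00 job.pbs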


Warning: All requested cluster nodes are allocated exclusively to one job; by default, nodes cannot be shared by multiple jobs. The allocated nodes are accounted for completely, even if your job uses them only partially.


Node types

There are two types of nodes installed in the test system:

  • 16 x AMD EPYC Naples (2 x 32 cores each): select with #PBS -l node_type=naples
  • 2 x AMD EPYC Rome (2 x 64 cores each): select with #PBS -l node_type=rome
    • This node type is available on workdays between 7:00 and 18:00 only via the queue "workday" (max. walltime 30 min). At other times you need to use the default queue "route" (max. walltime on weekends is 24h).


Core order

On Rome-based nodes, the core id corresponds to hyperthreads and sockets as follows:

core 0 - core 63: hyperthread 0 @ socket 0
core 64 - core 127: hyperthread 0 @ socket 1
core 128 - core 191: hyperthread 1 @ socket 0
core 192 - core 255: hyperthread 1 @ socket 1

Hence, cores 128 to 255 use the same physical resources as cores 0 to 127! Only use them if you understand the concept of hyperthreads and actually intend to use them! Otherwise, start a maximum of 128 threads per node.
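
The mapping above can be expressed as simple arithmetic on the core id; the following sketch (not needed for job submission, just an illustration of the scheme) computes the hyperthread, socket and shared physical core for a given logical core id:

```shell
# Map a logical core id (0..255) on a Rome node to its hyperthread,
# socket and physical core, following the table above.
core_id=150
hyperthread=$(( core_id / 128 ))     # 0 for ids 0..127, 1 for ids 128..255
socket=$(( (core_id % 128) / 64 ))   # 64 physical cores per socket
physical=$(( core_id % 128 ))        # physical core sharing resources
echo "core $core_id: hyperthread $hyperthread, socket $socket, shares physical core $physical"
```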



Pinning

We recommend always using omplace (in hybrid as well as pure MPI jobs) to pin processes and threads to CPU cores (cf. below) in order to prevent expensive migration.



Shall I use all the available cores?

Due to limited memory bandwidth, it might be beneficial not to use all the available cores in a node. Unfortunately, you have to find your sweet spot by trial & error. While doing this, keep in mind the internal structure of the processor (cf. Processor) and try to distribute processes uniformly over architectural building blocks (i.e. CCXs, CCDs, NUMA nodes and sockets). To make this easier, use the block and stride features of omplace (cf. manpage), or, if your intended placement is not possible by means of blocks & strides, use the scripts provided below to generate lists of core IDs to be passed to omplace via the -c flag.

#!/usr/bin/python

#########################################################################
# Usage:   ./distribute_by_fraction.py <numerator> <denominator>        #
# Example: ./distribute_by_fraction.py 32 128                           #
#                                                                       #
# The script will then generate a list of <numerator>/<denominator>*128 #
# cores to be used, equally distributed among the available 128 cores.  #
#########################################################################

import sys

numerator   = int(sys.argv[1])
denominator = int(sys.argv[2])

core_list = ""
for offset in range(0, 128, denominator):
    for j in range(numerator):
        index = int(j*round(float(denominator)/float(numerator)))
        core_list = core_list + str(offset + index) + ","

sys.stdout.write(core_list[:-1] + "\n")
#!/usr/bin/python

#########################################################################
# Example usage: ./distribute_by_pattern.py 1 0 0 0                     #
#                                                                       #
# This will generate a list with core 0 being used, cores 1-3 not being #
# used and so on (i.e. pattern will be repeated until status of all 128 #
# cores is defined).                                                    #
#########################################################################

import sys

core_list = ""
for i in range(128):
    if sys.argv[i%(len(sys.argv) - 1) + 1] == "1":
        core_list = core_list + str(i) + ","

sys.stdout.write(core_list[:-1] + "\n")
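
A generated core list can then be passed to omplace via -c, e.g. as follows (assuming the first script was saved as distribute_by_fraction.py and made executable; ./app stands for your own executable):

 mpirun -np 32 omplace -c $(./distribute_by_fraction.py 32 128) ./app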


Examples

See

man pbs_resources

regarding available resources (e.g. ncpus, mpiprocs, etc.) and how to specify resources in the job script.


pure MPI job using HPE MPI

Here is a simple pbs job script:

#!/bin/bash

#PBS -N Hi_Thomas
#PBS -l select=16:node_type=rome:mpiprocs=128
#PBS -l walltime=00:20:00
 
module load mpt/2.21
mpirun -np 2048 ./hi.hpe

To submit the job script execute

qsub Job.hi.hpe.pbs



pure MPI job using OpenMPI

Here is a simple pbs job script:

#!/bin/bash

#PBS -N Hi_Thomas
#PBS -l select=16:node_type=naples:mpiprocs=64
#PBS -l walltime=00:20:00
 
module load openmpi/4.0.1
mpirun -np 1024 --map-by core --bind-to core ./hi.hpe


hybrid MPI/OpenMP job using HPE MPI

To run an MPI application with 128 processes and two OpenMP threads per process on two compute nodes, include the following in the PBS job script:

#!/bin/bash

#PBS -N Hi_MPI_OpenMP
#PBS -l select=2:node_type=rome:mpiprocs=64:ompthreads=2
#PBS -l walltime=00:20:00
 
module load mpt/2.21
export OMP_NUM_THREADS=2
mpirun -np 128 omplace -nt 2 [-vv] ./hi.mpiomp

The omplace command helps with the placement of OpenMP threads within an MPI program. In the above example, the threads in a 128-process MPI program with two threads per process are placed as follows:

  • Rank 0, thread 0 on core 0 of socket 0 on compute node 0
  • Rank 0, thread 1 on core 1 of socket 0 on compute node 0
  • ...
  • Rank 31, thread 1 on core 63 of socket 0 on compute node 0
  • Rank 32, thread 0 on core 0 of socket 1 on compute node 0
  • ...
  • Rank 63, thread 1 on core 63 of socket 1 on compute node 0
  • Rank 64, thread 0 on core 0 of socket 0 on compute node 1
  • ...
  • Rank 127, thread 1 on core 63 of socket 1 on compute node 1

The optional -vv parameter prints out the placement of the processes and threads to standard output.
Warning: Due to the limited scaling of standard output, do not use the optional parameter -vv for medium and large jobs!


hybrid MPI/OpenMP job using HPE MPI and hyperthreads

The job described before can be run on the same physical resources with twice the number of processes and threads by using hyperthreads as follows:

#!/bin/bash

#PBS -N Hi_MPI_OpenMP_HT
#PBS -l select=2:node_type=rome:mpiprocs=128:ompthreads=2
#PBS -l walltime=00:20:00
 
module load mpt/2.21
export OMP_NUM_THREADS=2
mpirun -np 256 omplace -nt 2 [-vv] ./hi.mpiomp

Ranks will be placed as follows:

  • Rank 0, thread 0 on logical core 0 of core 0 of socket 0 on compute node 0
  • Rank 0, thread 1 on logical core 0 of core 1 of socket 0 on compute node 0
  • ...
  • Rank 31, thread 1 on logical core 0 of core 63 of socket 0 on compute node 0
  • Rank 32, thread 0 on logical core 0 of core 0 of socket 1 on compute node 0
  • ...
  • Rank 63, thread 1 on logical core 0 of core 63 of socket 1 on compute node 0
  • Rank 64, thread 0 on logical core 1 of core 0 of socket 0 on compute node 0
  • ...
  • Rank 127, thread 1 on logical core 1 of core 63 of socket 1 on compute node 0
  • Rank 128, thread 0 on logical core 0 of core 0 of socket 0 on compute node 1
  • ...
  • Rank 255, thread 1 on logical core 1 of core 63 of socket 1 on compute node 1


pure MPI job with stride > 1

If you need to leave cores unused, do as follows in order to still distribute processes uniformly over the cores:

#!/bin/bash

#PBS -N Hi_Thomas
#PBS -l select=1:node_type=rome:mpiprocs=32
#PBS -l walltime=00:20:00
 
module load mpt/2.21
mpirun -np 32 omplace -c 0-127:st=4 ./hi.hpe

This will start processes on cores 0, 4, 8, etc., i.e. with a stride of 4 (which means one process per CCX or L3 slice, cf. Processor).
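
The core list resulting from "0-127:st=4" can be reproduced on the command line as a quick sanity check (not required for job submission):

```shell
# Every 4th core id in the range 0..127, as selected by "-c 0-127:st=4"
core_list=$(seq 0 4 127 | tr '\n' ',' | sed 's/,$//')
echo "$core_list"                                 # 0,4,8,...,124
echo "$(seq 0 4 127 | wc -l | tr -d ' ') cores"   # 32 cores in total
```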

With respect to more advanced placement of processes and threads, cf.

man omplace

as well as here.


time-dependent limitations on the test system

In order to allow for fair sharing of the Rome nodes during business hours, HLRS decided to limit the walltime to a maximum of 30 min in the timeframe 7:00am - 6:00pm CET from Monday to Friday. To get your jobs to run in this timeframe, you have to use the queue workday via qsub -q workday or #PBS -q workday. Outside of this timeframe there are no explicit limits, but beware of the implicit 13h limit if you would like your job to run overnight during the working week. If you require more than 13h (which is unusual on a test system), you have to wait until the next weekend.
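
A submission during business hours could therefore look like this (the select values are only an example):

 qsub -q workday -l select=1:node_type=rome:mpiprocs=128 -l walltime=00:30:00 job.pbs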



Further information

With respect to further details, please refer to these slides.



Manuals