- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -
CRAY XE6 Using the Batch System
The only way to start a parallel job on the compute nodes of this system is to use the batch system. The installed batch system is based on
- the resource management system torque and
- the scheduler moab
Additionally, note that on the CRAY XE6 user applications are always launched on the compute nodes via the application launcher, aprun, which submits applications to the Application Level Placement Scheduler (ALPS) for placement and execution.
Detailed information about how to use this system, along with many examples, can be found in the Cray Application Developer's Environment User's Guide and in Workload Management and Application Placement for the Cray Linux Environment.
Running Jobs
Writing a submission script is typically the most convenient way to submit your job to the batch system. You generally interact with the batch system in two ways: through options specified in job submission scripts (these are detailed below in the examples) and by using torque or moab commands on the login nodes. There are three key commands used to interact with torque:
- qsub
- qstat
- qdel
Check the torque man page for more advanced commands and options:
man pbs
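A typical workflow with the three commands above might look like the following sketch. The job ID 123456.sdb is a placeholder for whatever qsub actually prints on your system:

```shell
# Submit a job script; qsub prints the assigned job ID
qsub my_batchjob_script.pbs        # e.g. prints: 123456.sdb

# Show the status of your own queued and running jobs
qstat -u $USER

# Delete a job, using the ID reported by qsub (123456 is a placeholder)
qdel 123456
```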
The qsub command
Batch Mode
Production jobs are typically run in batch mode. Batch scripts are shell scripts containing flags and commands to be interpreted by a shell and are used to run a set of commands in sequence. To submit a job, type
qsub my_batchjob_script.pbs
This will submit your job script "my_batchjob_script.pbs" to the job-queues.
A simple MPI job submission script for the XE6 would look like:
#!/bin/bash
#PBS -N job_name
#PBS -A account
#PBS -l mppwidth=32
#PBS -l mppnppn=16
#PBS -l walltime=00:20:00

# Change to the directory that the job was submitted from
cd $PBS_O_WORKDIR

# Launch the parallel job to the allocated compute nodes
aprun -n 32 -N 16 ./my_mpi_executable arg1 arg2 > my_output_file 2>&1
This will run your executable "my_mpi_executable" in parallel with 32 MPI processes. Torque will allocate 2 nodes to your job for a maximum of 20 minutes and place 16 processes on each node (one per core). The batch system allocates nodes exclusively to a single job. Once the walltime limit is exceeded, the batch system will terminate your job.
Important: You have to change into a subdirectory of /mnt/lustre_server (your workspace) before calling aprun.
All torque options start with the "#PBS" string.
You can override these options on the qsub command line:
qsub -N other_name -A myother_account -l mppwidth=64,walltime=01:00:00 my_batchjob_script.pbs
To make the environment settings (exported environment) of your current session available in your batch job, pass the option -V to qsub:
qsub -V my_batchjob_script.pbs
The individual options are explained in:
man qsub
The job launcher for XE6 parallel jobs (both MPI and OpenMP) is aprun. It needs to be started from a subdirectory of /mnt/lustre_server (your workspace). The aprun example above will start the parallel executable "my_mpi_executable" with the arguments "arg1" and "arg2". The job will be started with 32 MPI processes, 16 of them placed on each of your allocated nodes (remember that a node consists of 16 cores in the XE6 system). Nodes must first be allocated by the batch system (via qsub). To query further options, please use
man aprun
aprun -h
An example OpenMP job submission script for the XE6 nodes is shown below.
#!/bin/bash
#PBS -N job_name

# Request the number of cores that you need in total
#PBS -l mppwidth=16
#PBS -l mppnppn=16

# Request the time you need for computation
#PBS -l walltime=00:03:00

# Change to the directory that the job was submitted from
cd $PBS_O_WORKDIR

# Set the number of OpenMP threads per node
export OMP_NUM_THREADS=16

# Launch the OpenMP job to the allocated compute node
aprun -n 1 -N 1 -d $OMP_NUM_THREADS ./my_openmp_executable.x arg1 arg2 > my_output_file 2>&1
This will run your executable "my_openmp_executable.x" with 16 threads on one node; the number of threads is set via the environment variable OMP_NUM_THREADS.
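Combining the two modes above, a hybrid MPI/OpenMP job can be sketched as follows. The executable name "my_hybrid_executable.x" and the particular split of 8 MPI processes with 4 threads each are illustrative, not prescribed; mppdepth is the torque resource corresponding to aprun's -d option:

```shell
#!/bin/bash
#PBS -N hybrid_job
#PBS -l mppwidth=8
#PBS -l mppnppn=4
#PBS -l mppdepth=4
#PBS -l walltime=00:20:00

# Change to the directory that the job was submitted from
cd $PBS_O_WORKDIR

# 4 OpenMP threads per MPI process (illustrative choice)
export OMP_NUM_THREADS=4

# 8 MPI processes in total, 4 per node, each spanning 4 cores (-d),
# filling the 16 cores of each of the 2 allocated XE6 nodes
aprun -n 8 -N 4 -d $OMP_NUM_THREADS ./my_hybrid_executable.x > my_output_file 2>&1
```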
Interactive Mode
Interactive mode is typically used for debugging or optimizing code but not for running production code. To begin an interactive session, use the "qsub -I" command:
qsub -I -l mppwidth=32,walltime=00:30:00
If the requested resources are available (in the example above: 2 nodes/32 cores for 30 minutes), you will get a new session on the login node with your requested resources. You then use the aprun command to launch your application on the allocated compute nodes. When you are finished, enter logout to exit the batch session and return to the normal command line.
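An interactive session could then proceed along these lines. The workspace path component and the executable name are placeholders for your own:

```shell
# Request an interactive session: 2 nodes/32 cores for 30 minutes
qsub -I -l mppwidth=32,walltime=00:30:00

# ... once the interactive batch session starts, change into your
# workspace (a subdirectory of /mnt/lustre_server, as required for aprun)
cd /mnt/lustre_server/<your_workspace>

# Launch the application on the allocated compute nodes
aprun -n 32 -N 16 ./my_mpi_executable

# Leave the batch session and return to the normal command line
logout
```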
Notes for requesting resources
- You must use (in Interactive Mode) the "-l mppwidth=" option to specify at least one core when you start the interactive session. If you do not, your request for an interactive session will pause indefinitely.
- Remember that aprun runs within the context of a batch session, so the maximum size of the job is determined by the resources you requested when launching that session. You cannot use aprun to consume more resources than you reserved with qsub; once a batch session begins, you can only use up to the resources initially requested.
- While your job is running (in batch mode), STDOUT and STDERR are written to files in a system directory, and the output is copied to your submission directory only after the job completes. Specifying the "-j oe" option and redirecting the output to a file (see examples above) makes it possible to view STDOUT and STDERR while the job is running.
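For example, the following script header combines both techniques; the job name and output file name are illustrative:

```shell
#!/bin/bash
#PBS -N job_name
# Join STDERR into the STDOUT stream of the batch job
#PBS -j oe
#PBS -l mppwidth=32
#PBS -l mppnppn=16
#PBS -l walltime=00:20:00

cd $PBS_O_WORKDIR

# Redirecting aprun's output to a file in your workspace makes it
# readable during the run, not only after the job completes
aprun -n 32 -N 16 ./my_mpi_executable > my_output_file 2>&1
```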