- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

CRAY XE6 Using the Batch System


The only way to start a parallel job on the compute nodes of this system is to use the batch system. The installed batch system is based on

  • the resource management system torque and
  • the scheduler moab

Additionally, note that on the CRAY XE6 user applications are always launched on the compute nodes using the application launcher aprun, which submits applications to the Application Level Placement Scheduler (ALPS) for placement and execution.

Detailed information about how to use this system, together with many examples, can be found in the Cray Application Developer's Environment User's Guide and in Workload Management and Application Placement for the Cray Linux Environment.

Running Jobs

Writing a submission script is typically the most convenient way to submit your job to the batch system. You generally interact with the batch system in two ways: through options specified in job submission scripts (these are detailed below in the examples) and by using torque or moab commands on the login nodes. There are three key commands used to interact with torque:

  • qsub
  • qstat
  • qdel

Check the torque man page for more advanced commands and options:

 man pbs 
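A typical interaction with the batch system on a login node looks like the following sketch (the job ID 123456.sdb is only a placeholder; the actual format depends on the Torque server):

 qsub my_batchjob_script.pbs   # submit the job script; prints the job ID, e.g. 123456.sdb
 qstat -u $USER                # list your own jobs and their current state
 qstat -f 123456.sdb           # show detailed information about a single job
 qdel 123456.sdb               # remove a queued or running job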

The qsub command

To submit a job, type

 qsub my_batchjob_script.pbs

This will submit your job script "my_batchjob_script.pbs" to the job-queues.

A simple MPI job submission script for the XE6 would look like:

#!/bin/bash --login
#PBS -N job_name
#PBS -A account
#PBS -l mppwidth=32
#PBS -l mppnppn=4
#PBS -l walltime=00:20:00             
  
# Change to the directory that the job was submitted from
cd $PBS_O_WORKDIR

# Launch the parallel job
aprun -n 32 -N 4 ./my_mpi_executable arg1 arg2 > my_output_file 2>&1

This will run your executable "my_mpi_executable" in parallel with 32 MPI processes. Torque will allocate 8 nodes to your job and place 4 processes on each node.
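The mpp* resource requests map directly onto the aprun options: mppwidth gives the total number of processing elements (aprun -n) and mppnppn the number of processing elements per node (aprun -N). For a hybrid MPI/OpenMP code the thread depth per process can be requested in the same way via mppdepth and aprun -d. The following is a minimal sketch, assuming a hypothetical executable my_hybrid_executable that runs 4 OpenMP threads per MPI process:

#!/bin/bash --login
#PBS -N hybrid_job
#PBS -A account
#PBS -l mppwidth=32
#PBS -l mppnppn=4
#PBS -l mppdepth=4
#PBS -l walltime=00:20:00

# Change to the directory that the job was submitted from
cd $PBS_O_WORKDIR

# One OpenMP thread per core reserved for each MPI process
export OMP_NUM_THREADS=4

# 32 MPI processes, 4 processes per node, 4 threads per process
aprun -n 32 -N 4 -d 4 ./my_hybrid_executable > my_output_file 2>&1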

Important: You have to change into a subdirectory of /mnt/lustre_server (your workspace) before calling aprun.
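If you submit the job from a directory inside your workspace, the cd $PBS_O_WORKDIR in the example above already satisfies this requirement. Otherwise, change into the workspace explicitly; a minimal sketch (the workspace path below is only a hypothetical example):

# Change into the workspace on the Lustre file system before launching
cd /mnt/lustre_server/ws/my_workspace || exit 1

aprun -n 32 -N 4 ./my_mpi_executable arg1 arg2 > my_output_file 2>&1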

All torque options in the job script start with the "#PBS" string. The individual options are explained in:

 man qsub