- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

CRAY XE6 Using the Batch System

From HLRS Platforms

Revision as of 16:23, 13 December 2010

The only way to start a parallel job on the compute nodes of this system is to use the batch system. The installed batch system is based on

  • the resource management system torque and
  • the scheduler moab

In addition, note that on the CRAY XE6 user applications are always launched on the compute nodes through the application launcher, aprun, which submits them to the Application Level Placement Scheduler (ALPS) for placement and execution.

Detailed information about how to use this system, together with many examples, can be found in the Cray Application Developer's Environment User's Guide and in Workload Management and Application Placement for the Cray Linux Environment.

Running Jobs

Writing a submission script is typically the most convenient way to submit your job to the batch system. You generally interact with the batch system in two ways: through options specified in job submission scripts (these are detailed below in the examples) and by using torque or moab commands on the login nodes. There are three key commands used to interact with torque:

  • qsub – submit a job
  • qstat – query the status of jobs and queues
  • qdel – delete a job
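A typical interaction with these commands looks like the following sketch (the job ID 12345 is a placeholder; the exact qstat output format depends on the site's torque configuration):

```shell
# Submit a job script; qsub prints the ID assigned to the new job
qsub my_batchjob_script.pbs

# List your own jobs and their states (Q = queued, R = running)
qstat -u $USER

# Show the full details of a single job (12345 is a placeholder ID)
qstat -f 12345

# Remove a queued or running job
qdel 12345
```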

Check the man page of torque for more advanced commands and options:

 man pbs 

The qsub command

To submit a job, type

 qsub my_batchjob_script.pbs

This will submit your job script "my_batchjob_script.pbs" to the job-queues.
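Options may also be given directly on the qsub command line, where they override the corresponding #PBS directives in the script. For example, to request a longer walltime for one particular run without editing the script:

```shell
# Command-line options take precedence over the #PBS lines in the script
qsub -l walltime=01:00:00 my_batchjob_script.pbs
```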

A simple MPI job submission script for the XE6 would look like:

#!/bin/bash --login
#PBS -N job_name
#PBS -A account
#PBS -l mppwidth=32
#PBS -l mppnppn=16
#PBS -l walltime=00:20:00             
  
# Change to the directory that the job was submitted from
cd $PBS_O_WORKDIR

# Launch the parallel job
aprun -n 32 -N 16 ./my_mpi_executable arg1 arg2 > my_output_file 2>&1

This will run your executable "my_mpi_executable" in parallel with 32 MPI processes. Torque will allocate 2 nodes to your job and place 16 processes on each node (one per core).

Important: You have to change into a subdirectory of /mnt/lustre_server (your workspace), before calling aprun.
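In practice this means submitting the job (or at least changing directory in the script) from within your workspace; a sketch, where the workspace path below is an assumed example:

```shell
# Assumed example path below /mnt/lustre_server (your actual workspace differs)
cd /mnt/lustre_server/ws/my_workspace
qsub my_batchjob_script.pbs
```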

All torque options start with a "#PBS"-string. The individual options are explained in:

 man qsub

The job launcher for XE parallel jobs (both MPI and OpenMP) is aprun. It needs to be started from a subdirectory of /mnt/lustre_server (your workspace). The aprun example above will start the parallel executable "my_mpi_executable" with the arguments "arg1" and "arg2". The job will be started using 32 MPI processes with 16 processes placed on each node (remember that a node consists of 16 cores in the XE6 system).
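Since aprun also launches OpenMP and hybrid MPI/OpenMP jobs, a hybrid submission script might look like the following sketch (8 MPI processes with 4 OpenMP threads each, filling the 16 cores on each of two nodes; "my_hybrid_executable" is a placeholder name). The mppdepth resource and the matching aprun -d option reserve the cores for the threads of each process:

```shell
#!/bin/bash --login
#PBS -N hybrid_job
#PBS -l mppwidth=8
#PBS -l mppnppn=4
#PBS -l mppdepth=4
#PBS -l walltime=00:20:00

# Change to the directory that the job was submitted from
cd $PBS_O_WORKDIR

# 4 OpenMP threads per MPI process
export OMP_NUM_THREADS=4

# 8 MPI processes in total, 4 per node, each 4 cores "deep" for its threads
aprun -n 8 -N 4 -d 4 ./my_hybrid_executable > my_output_file 2>&1
```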