CRAY XE6 Using the Batch System
The only way to start a parallel job on the compute nodes of this system is to use the batch system. The installed batch system is based on
- the resource management system torque and
- the scheduler moab
Additionally, you have to know that on the CRAY XE6 user applications are always launched on the compute nodes using the application launcher, aprun, which submits applications to the Application Level Placement Scheduler (ALPS) for placement and execution.
Detailed information about how to use this system, together with many examples, can be found in the Cray Application Developer's Environment User's Guide and in Workload Management and Application Placement for the Cray Linux Environment.
Running Jobs
Writing a submission script is typically the most convenient way to submit your job to the batch system. You generally interact with the batch system in two ways: through options specified in job submission scripts (detailed in the examples below) and by using torque or moab commands on the login nodes. There are three key commands used to interact with torque, sketched briefly after this list:
- qsub
- qstat
- qdel
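A minimal usage sketch of these three commands; the job ID 12345 is a placeholder for the identifier that qsub prints when a job is accepted:
qsub my_batchjob_script.pbs    # submit a job script to the queues
qstat -u $USER                 # list the status of your own jobs
qdel 12345                     # cancel the job with the given ID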
Check the man page of torque for more advanced commands and options:
man pbs
The qsub command
To submit a job, type
qsub my_batchjob_script.pbs
This will submit your job script "my_batchjob_script.pbs" to the job-queues.
A simple MPI job submission script for the XE6 would look like:
#!/bin/bash
#PBS -N job_name
#PBS -A account
#PBS -l mppwidth=32
#PBS -l mppnppn=16
#PBS -l walltime=00:20:00

# Change to the directory that the job was submitted from
cd $PBS_O_WORKDIR

# Launch the parallel job to the allocated compute nodes
aprun -n 32 -N 16 ./my_mpi_executable arg1 arg2 > my_output_file 2>&1
This will run your executable "my_mpi_executable" in parallel with 32 MPI processes. Torque will allocate 2 nodes to your job for a maximum time of 20 minutes and place 16 processes on each node (one per core). The batch systems allocates nodes exclusively only for one job. After the walltime limit is exceeded, the batch system will terminate your job.
Important: You have to change into a subdirectory of /mnt/lustre_server (your workspace) before calling aprun.
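For example, near the top of your job script (the path below is a placeholder; use the directory of your own workspace):
cd /mnt/lustre_server/my_workspace    # placeholder; obtain your actual path via the workspace mechanism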
All torque options start with a "#PBS" string.
You can override these options on the qsub command line:
qsub -N other_name -A myother_account -l mppwidth=64,walltime=01:00:00 my_batchjob_script.pbs
To make the exported environment of your current session available in your batch job, call qsub with the option -V:
qsub -V my_batchjob_script.pbs
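If you only want to pass selected variables instead of the whole environment, qsub also accepts a lowercase -v with an explicit variable list (the variable names here are placeholders):
qsub -v MY_PARAM=42,INPUT_DIR=$HOME/input my_batchjob_script.pbs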
The individual options are explained in:
man qsub
The job launcher for XE6 parallel jobs (both MPI and OpenMP) is aprun. It needs to be started from a subdirectory of /mnt/lustre_server (your workspace). The aprun example above will start the parallel executable "my_mpi_executable" with the arguments "arg1" and "arg2". The job will be started using 32 MPI processes with 16 processes placed on each of your allocated nodes (remember that a node consists of 16 cores in the XE6 system). The nodes must have been allocated by the batch system beforehand (via qsub). To query further options, please use
man aprun
aprun -h
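In brief, the three aprun options used in the examples on this page (see the man page for the authoritative description; executables and values are placeholders):
aprun -n 32 ./my_exe              # -n: total number of processing elements (e.g. MPI processes)
aprun -n 32 -N 16 ./my_exe        # -N: processing elements per node
aprun -n 1 -N 1 -d 16 ./my_exe    # -d: depth, i.e. cores (threads) per processing element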
An example OpenMP job submission script for the XE6 nodes is shown below.
#!/bin/bash
#PBS -N job_name
# Request the number of cores that you need in total
#PBS -l mppwidth=16
#PBS -l mppnppn=16
# Request the time you need for computation
#PBS -l walltime=00:03:00

# Change to the directory that the job was submitted from
cd $PBS_O_WORKDIR

# Set the number of OpenMP threads per node
export OMP_NUM_THREADS=16

# Launch the OpenMP job to the allocated compute node
aprun -n 1 -N 1 -d $OMP_NUM_THREADS ./my_openmp_executable.x arg1 arg2 > my_output_file 2>&1
This will run your executable "my_openmp_executable" using 16 threads on one node. We set the environment variable OMP_NUM_THREADS to 16.