- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

Batch system

Batch queues

batch queue overview

class    max. #CPUs/node   max. #nodes   max. memory   max. elapsed time   mode
------   ---------------   -----------   -----------   -----------------   ---------
test     4                 1             16 GB         5 minutes (CPU)     shared
single   8                 1             64 GB         12 hours            shared
multi    16                4             510 GB        12 hours            dedicated

Note: the entry point for all classes is the queue dq

Job examples

The batch system in use is NEC NQSII; its directives follow the POSIX standard for batch systems and look very much like PBS or Sun Grid Engine directives.

A batch job starts with a few comments, giving information to the batch system about the nature of the job. All of these can also be given on the qsub command line.

Please note: The account code is mandatory, and it is not your loginname, but a code used for accounting. Each user has at least one, but can have several. The chosen account code is used for billing the job, so users with several codes can choose how the job is billed.

A default account code may be provided in $HOME/.acct.
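
A minimal sketch of setting a default code, assuming the file simply contains the account code on a single line (xy1234 is a placeholder; your valid codes are shown in the login message):

echo "xy1234" > $HOME/.acct            # xy1234: placeholder account code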


Job sample: large job, will be executed in 'multi' on v901-v907

#PBS -q dq
#PBS -l cpunum_job=16           # cpus per Node
#PBS -b 2                       # number of nodes, max 4 at the moment
#PBS -l elapstim_req=12:00:00   # max wallclock time
#PBS -l cputim_job=192:00:00    # max accumulated cputime per node
#PBS -l cputim_prc=11:55:00     # max accumulated cputime per process
#PBS -l memsz_job=500gb         # memory per node
#PBS -A <acctcode>              # Your Account code, see login message, without <>
#PBS -j o                       # join stdout/stderr
#PBS -T mpisx                   # Job type: mpisx for MPI
#PBS -N MyJob                   # job name
#PBS -M MyMail@mydomain         # you should always specify your email address

Job sample: small job, will be executed in 'single' on v900 in shared mode; other jobs will run on the same node.

#PBS -q dq
#PBS -l cpunum_job=8            # cpus per Node
#PBS -b 1                       # number of nodes
#PBS -l elapstim_req=12:00:00   # max wallclock time
#PBS -l cputim_job=192:00:00    # max accumulated cputime per node
#PBS -l cputim_prc=11:55:00     # max accumulated cputime per process
#PBS -l memsz_job=64gb          # memory per node
#PBS -A <acctcode>              # Your Account code, see login message, without <>
#PBS -j o                       # join stdout/stderr
#PBS -T mpisx                   # Job type: mpisx for MPI
#PBS -N MyJob                   # job name
#PBS -M MyMail@mydomain         # you should always specify your email address

Job sample: test job, will be executed in 'test'; one job is always allowed to run, no matter how loaded the node is.

#PBS -q dq
#PBS -l cpunum_job=4            # cpus per Node
#PBS -l cpunum_prc=4            # cpus per process
#PBS -b 1                       # number of nodes
#PBS -l elapstim_req=1200       # max wallclock time
#PBS -l cputim_job=600          # max accumulated cputime per node
#PBS -l cputim_prc=599          # max accumulated cputime per process
#PBS -l memsz_job=16gb          # memory per node
#PBS -A <acctcode>              # Your Account code, see login message, without <>
#PBS -j o                       # join stdout/stderr
#PBS -T mpisx                   # Job type: mpisx for MPI
#PBS -N MyJob                   # job name
#PBS -M MyMail@mydomain         # you should always specify your email address


Please note: The above time and memory limits are the maximum that can be specified. You should specify values that come close to reality, as the scheduler takes those values as input when selecting jobs. Smaller jobs can fit into holes, so realistic smaller values will increase the probability that your job starts early.
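
For example, a hypothetical single-node job that needs about two hours and 20 GB would request values close to that instead of the queue maximum:

#PBS -l elapstim_req=02:30:00   # expected ~2 hours, plus some headroom
#PBS -l memsz_job=24gb          # expected ~20 GB, plus some headroom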

Contents of Job

A typical job will create a workspace (see workspace mechanism), copy some data, run the application and save some data at the end.

Multithreaded job

ws=`ws_allocate myimportantdata 10`    # get a workspace for 10 days
cd $ws                                 # go there
cp $HOME/input/file.dat .              # get some data
export OMP_NUM_THREADS=8               # use 8 OMP threads
export F_PROGINF=DETAIL                # get some performance information after the run
$HOME/bin/myApp                        # run my application
cp output.dat $HOME/output

MPI job

ws=`ws_allocate myimportantdata 10`    # get a workspace for 10 days
cd $ws                                 # go there
cp $HOME/input/file.dat .              # get some data
export MPIPROGINF=DETAIL
mpirun -nn 2 -nnp 16 $HOME/bin/myApp   # run my application on 2 nodes, 16 CPUs each (32 total)
cp output.dat $HOME/output

Hybrid OpenMP and MPI job

SCR=`ws_allocate MyWorkspace 2`        # get a workspace for 2 days
cd $SCR
export OMP_NUM_THREADS=16              # 16 threads
export MPIPROGINF=YES
export MPIMULTITASKMIX=YES
MPIEXPORT="OMP_NUM_THREADS"            # make this environment known to all nodes
export MPIEXPORT
mpirun -nn 4 -nnp 1 $HOME/bin/mycode   # run on 4 nodes using 1 process per node, but 16 threads
cp outfile $HOME/output

More about mpirun

Basic syntax for a job on a single node is

mpirun -np X app

Basic syntax for a job on multiple nodes is

mpirun -nn X -nnp Y app

NQSII sets the variable $_MPINNODES to the number of nodes requested with #PBS -b N (or qsub -b N), so it can be used as the argument to mpirun -nn to avoid inconsistency.
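
For example, using the multi-node syntax from above, the node count then always matches the resource request:

mpirun -nn $_MPINNODES -nnp 16 $HOME/bin/myApp   # 16 processes on each requested node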

A more complex example would be

mpirun -host 0 -np 15 -host 1 -np 13 app

which shows how to run a 28-CPU job on two nodes. The nodes assigned to the job have a logical node id, starting with 0. The mapping from physical node id to logical node id is provided in the environment variable $_MPILNODELIST.
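
To inspect the mapping from inside a job script (the exact output format is site-specific; this is only a sketch):

echo "physical-to-logical node mapping: $_MPILNODELIST"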

Redirect stdout/stderr for each MPI process into separate files

 mpirun -np X /usr/lib/mpi/mpisep.sh app

The environment variable $MPISEPSELECT determines whether stdout and stderr are separated or merged.
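
A sketch of the combined usage; the value 3 is an assumption, so please check the MPI documentation for the supported values and their meaning:

export MPISEPSELECT=3                  # hypothetical value: separate both streams
mpirun -np 8 /usr/lib/mpi/mpisep.sh $HOME/bin/myApp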

NQSII Usage

To submit a job use

qsub jobfile
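
A minimal complete job file, combining directives and workspace steps from the examples above (replace <acctcode> with one of your account codes):

#PBS -q dq
#PBS -b 1                       # one node
#PBS -l cpunum_job=8            # 8 cpus on that node
#PBS -l elapstim_req=01:00:00   # one hour wallclock
#PBS -A <acctcode>              # your account code
#PBS -T mpisx                   # MPI job
#PBS -N MyJob                   # job name

ws=`ws_allocate myrun 10`       # workspace for 10 days
cd $ws
cp $HOME/input/file.dat .
mpirun -np 8 $HOME/bin/myApp
cp output.dat $HOME/output

Then submit it with qsub myjob.sh (the file name is arbitrary).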


To monitor system usage, you can use the qstat command of NQSII to see your requests, or you can use the qs script on ontake/yari to see all running and pending requests.

qstat output looks like:

RequestID       ReqName  UserName Queue     Pri STT S   Memory      CPU   Elapse R H Jobs
--------------- -------- -------- -------- ---- --- - -------- -------- -------- - - ----

For a detailed description of all fields, please see the qstat manpage on a1. Jobs shows the number of nodes a job requested. Please note that CPU time is the CPU time of the currently running process within the job. If you want to see the accumulated time of the whole request, use qstat -c 1.

To learn on which nodes your job is running, use qstat -J or qs.

To see all jobs in the system, please use qs on ontake or yari (qs is not available on the SX):

STAT REQ-ID  OWNER     NAME    QUEUE  NODES    TIME       TIME ESTIMATIONS         HOSTS
---- ------ -------- -------- -------- --- ------------ -------------------------- -----

qs shows the requests in the order in which they will be started by the scheduler. For privacy reasons, you cannot see all details of other users' requests, but you can see all requests in the system, waiting or running.

The time and memory numbers shown are the current consumption and the requested limit. The ESTIMATIONS column gives the estimated time when the job will start; this estimate is based on a 72-hour prediction.

To delete a job, use the qdel command. Please note: qdel of NQSII sends a SIGTERM first, followed by a SIGKILL after 5 seconds. You can change the number of seconds using the -g option. With qdel -g -1, SIGKILL is sent immediately.
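
For example (12345 stands for a request ID as shown by qstat):

qdel 12345          # SIGTERM, then SIGKILL after 5 seconds
qdel -g 30 12345    # allow 30 seconds between SIGTERM and SIGKILL
qdel -g -1 12345    # send SIGKILL immediately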

Please avoid writing large amounts of stdout; redirect stdout and stderr of your application into a file in your job's directory. Writing large stdout requires spool space of unpredictable size, and always causes problems when those files are stored back into users' home directories.
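
For example, inside the job script (run.log is an arbitrary file name):

$HOME/bin/myApp > run.log 2>&1         # stdout/stderr go to the job's working directory, not the spool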

Tip: If you want to make sure a batch request is able to clean up when it hits a time limit, specify a second limit. In addition to cputim_job you can specify a cputim_prc. Make that limit a few minutes shorter, and the process hitting the limit (probably your simulation) will be killed first, giving your batch job some time to clean up. The same applies to the elapsed time limit.
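
A sketch of this pattern, with placeholder limits:

#PBS -l cputim_job=12:00:00     # limit for the whole job
#PBS -l cputim_prc=11:55:00     # per-process limit, a few minutes shorter

$HOME/bin/myApp                 # killed first when it reaches cputim_prc
cp output.dat $HOME/output      # the job script keeps running and can save results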

Scheduling

The deployed scheduler (except for the test queue) uses a fairshare and backfilling strategy.

  • new users with little usage in recent weeks have high priority
  • small jobs can overtake large jobs to fill gaps
  • large jobs have priority in general (but have a harder time finding resources)
  • jobs age: long-waiting jobs gain priority
  • the scheduler does not preempt running jobs; all jobs in HOLD state were held by a user or an administrator
  • jobs can be checkpointed and restarted; a job which is in HOLD and was running will continue after being released.