- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

10 minutes before the first job

From HLRS Platforms

We ask all users of any server operated by HLRS

Please take 10 minutes to read this article completely!


This page describes the basic rules as briefly as possible; if you want to know more about a topic, follow the corresponding link. But again: please read at least this page!


Storage (see Storage_usage_policy)

There is no backup on any filesystem. Please copy important data into the archive.

HOME: Do not run any computational (I/O-intensive) job in the HOME directory. For compute jobs, use a workspace!

Workspace (see Workspace_mechanism): workspaces are an expensive resource and are intended for active projects only. Move suspended projects into the archive. Each workspace has a lifetime; if this lifetime is exceeded, all data will be deleted automatically. It is possible to receive an email reminder. Copy important data into the archive!
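As a sketch, workspace handling typically looks like the following (command names follow the standard HLRS workspace mechanism; the workspace name and durations are placeholders, and the allowed maximum lifetime and number of extensions differ per system):

```shell
# Allocate a workspace named "myproject" with a lifetime of 30 days;
# the command prints the path of the new workspace directory
ws_allocate myproject 30

# List your existing workspaces and their remaining lifetimes
ws_list

# Extend the lifetime of a workspace before it expires
# (the number of extensions per workspace is limited)
ws_extend myproject 30
```

See Workspace_mechanism for the exact options available on your system, including how to enable the email reminder.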

Archive: do not store small files in the archive. Please check HPSS_User_Access for more information.

Data transfer to / from a workspace can be done using Data_Transfer_with_GridFTP. Using scp via the frontend nodes will fail due to the CPU limits.
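A minimal sketch of such a transfer with the GridFTP command-line client (the host name and paths are placeholders; consult Data_Transfer_with_GridFTP for the actual endpoints and authentication setup):

```shell
# Copy a local file into a workspace via GridFTP.
# "gridftp.example.hlrs.de" and the paths below are placeholders.
globus-url-copy \
    file:///local/path/data.tar \
    gsiftp://gridftp.example.hlrs.de/path/to/workspace/data.tar
```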

Compute servers

The frontend nodes have a CPU time limit of 2 hours configured. Do not run compute-intensive jobs on the frontend nodes. The compute resources (compute nodes) are only available through the batch system. Please read the batch system documentation for the corresponding platform.

Cray Hazel Hen: this system is NOT a cluster. Here we describe two topics which have caused trouble multiple times:

 * To start parallel tasks, using aprun is required (NOT mpirun!). If you start a parallel job using the wrong mechanism, this may cause trouble for all users.
 * A task using a large amount of memory should be started on a compute node; use aprun to do so.
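The points above can be sketched as a minimal batch job script (the node count, walltime, and application name are placeholders; see the batch system documentation for the actual resource syntax on Hazel Hen):

```shell
#!/bin/bash
# Minimal PBS job script sketch for Hazel Hen.
# Request one compute node with 24 cores for 20 minutes
#PBS -l nodes=1:ppn=24
#PBS -l walltime=00:20:00

# Change into the directory the job was submitted from
cd $PBS_O_WORKDIR

# Launch 24 parallel tasks on the compute node with aprun
# (never mpirun, and never run the application directly on a
# login/MOM node, where it would consume that node's memory)
aprun -n 24 ./my_app
```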