- Infos im HLRS Wiki sind nicht rechtsverbindlich und ohne Gewähr -
- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -
Storage (Hawk)
Revision as of 13:08, 29 September 2023
HOME Directories
Users' HOME directories are located on a shared RAID system and are mounted via NFS on all login (frontend) and compute nodes. The path to the HOME directories is consistent across all nodes. The filesystem space on HOME is limited by a small quota (50 GB per user, 200 GB per group)! The quota usage for your account and your groups can be listed using the na_quota command on the login nodes.
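Quota usage can be checked directly on a login node; a minimal sketch (na_quota is the command named above, invoked here without options — any options it may accept are not covered on this page):

```shell
# Show HOME quota usage for your account and your groups.
# Run on a Hawk login node; na_quota is an HLRS-specific command.
na_quota
```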
SCRATCH directories / workspace mechanism
For large files and fast I/O, Lustre-based scratch directories are available which make use of the high-speed network infrastructure. Scratch directories are available on all compute and login (frontend) nodes via the WORKSPACE MECHANISM.
On Hawk, 2 different workspace filesystems are available, each with different properties:
- ws10:
- Basepath: /lustre/hpe/ws10
- available storage capacity: 22 PB
- project quota limits: enabled for blocks and files
- max. workspace duration: 60 days
- max. workspace extensions: 3
- lustre devices: 2 MDS, 4 MDT, 8 OSS, 48 OST
- performance: < 100 GiB/s
- ws11:
- Basepath: /lustre/hpe/ws11
- available storage capacity: 15 PB
- project quota limits: disabled
- max. workspace duration: 10 days
- max. workspace extensions: 1
- lustre devices: 2 MDS, 2 MDT, 20 OSS, 40 OST
- performance: ~200 GiB/s
Each user has access to both filesystems via the WORKSPACE MECHANISM.
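A typical workspace lifecycle can be sketched with the standard hpc-workspace tools (ws_allocate, ws_list, ws_extend, ws_release); the exact option set available on Hawk is an assumption — check the man pages on the login nodes:

```shell
# Allocate a workspace named "run1" on ws11 for 10 days
# (-F selects the filesystem; ws11 allows at most 10 days).
ws_allocate -F ws11 run1 10

# List your current workspaces and their remaining lifetimes.
ws_list

# Extend the workspace by another 10 days; on ws11 at most
# 1 extension is allowed (3 on ws10).
ws_extend -F ws11 run1 10

# Release the workspace once the data is no longer needed.
ws_release -F ws11 run1
```

The commands print the workspace path on allocation; scripts typically capture it, e.g. WSDIR=$(ws_allocate -F ws11 run1 10).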
Filesystem Policy
IMPORTANT! NO BACKUP!! There is NO backup of any user data located on HWW cluster systems. The only protection of your data is the redundant disk subsystem. This RAID system can handle the failure of one component. There is NO way to recover inadvertently removed data. Users must back up critical data at their local site!
Long term storage
For data that must remain available longer than the workspace time limits allow, and for very important data, please use the High Performance Storage System (HPSS).
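As an illustration, archiving to HPSS is commonly done with the standard HPSS clients htar and hsi; whether these clients are the access path to HPSS from Hawk is an assumption — consult the HPSS documentation:

```shell
# Bundle a results directory into a tar archive stored in HPSS
# (htar is a standard HPSS client; the archive name is hypothetical).
htar -cvf results.tar ./results

# Later, list the archive contents without retrieving the data.
htar -tvf results.tar
```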