- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -
Storage usage policy
This page describes how to handle data within the HLRS computing environment.

Please be aware that storage resources are optimized for a particular usage. For example, workspace filesystems are tuned for the bandwidth of large parallel I/O operations. This requires many components (disks, controllers, network connections, ...), which makes them expensive. To get the maximum benefit for all users, please read the following ''extra short'' guidelines.
'''Important Notice!''' There is no backup on any filesystem. Please copy important data into the archive.
The following filesystems are available:
* Home Directory - this storage type is available on all compute resources within the HLRS network. Use it for e.g. profiles, script files for workflow tasks, and sources for program development. Do not use this directory for number crunching (especially not for large parallel jobs)!
* Workspace - here user jobs read and write large amounts of data. I/O has to be optimized to get reasonable performance (e.g. by using optimized I/O libraries); if unsure, please feel free to contact your project supervisor.
* TMP directory - on almost all compute hosts, this is an "in-memory" filesystem used for small temporary files. All data will be removed at the end of the job. Each node has its own TMP directory, not shared with other nodes; it is fast but small.
* HPSS - mid-term storage system where users can keep permanent data and save a copy of important data as a user-defined backup. Please be aware that this kind of storage system cannot handle large numbers of small files well.
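As a sketch of how these filesystems are meant to interact, the snippet below does scratch I/O on the fast node-local TMP directory and copies only the result worth keeping to the workspace before the job ends. The environment variable names and fallback paths are illustrative assumptions, not HLRS-defined interfaces.

```python
import os
import shutil
import tempfile

# Assumption: TMPDIR points at the node-local in-memory filesystem inside a
# job; WORKSPACE stands in for a directory on the large parallel filesystem.
tmp_root = os.environ.get("TMPDIR", "/tmp")
workspace = os.environ.get("WORKSPACE", tempfile.mkdtemp())

# Do scratch I/O on the fast, small, node-local TMP directory ...
scratch = tempfile.mkdtemp(dir=tmp_root)
with open(os.path.join(scratch, "intermediate.dat"), "w") as f:
    f.write("temporary data, gone when the job ends\n")

# ... and copy only the results worth keeping to the workspace,
# because TMP is wiped at the end of the job.
shutil.copy(os.path.join(scratch, "intermediate.dat"),
            os.path.join(workspace, "result.dat"))
```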
== Usage guidelines for workspace filesystems ==
* The workspace filesystems are expensive resources; only data that is necessary for the ongoing work should be held in these directories. They are NOT a place for permanent (mid- or long-term) storage.
* If a project is suspended for a while, users have to free the disk space. Data can be transferred into the HPSS storage system.
* Storage resources are overcommitted; this means the disk quota is not equal to the granted storage space. (This is also true for compute resources.)
* <font color="red">If you as a user or your group exceed the quota limit, no further jobs will be executed in the batch queues!</font> In this case, you and your group will receive an email informing you about this. ''This policy exists because the filesystem, too, is an expensive resource that should be used as sparingly as possible.''
* Performance optimization is important; for example, small I/O operations kill performance. The following bandwidths are possible for well-optimized applications:
{| class="wikitable"
|-
! File System
! Host
! Max. Performance
|-
| WS11
| hawk
| ~200 GB/s
|-
| WS3
| vulcan cluster
| ? GB/s
|-
|}
* How to use the tools for managing workspace directories is explained [[Workspace_mechanism|here]].
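A minimal illustration of why small I/O operations hurt: issuing one tiny write per record forces a system call (and, on a parallel filesystem, a server round trip) each time, while aggregating records into large chunks amortizes that cost. The sketch below uses plain Python buffering as a stand-in for a real optimized I/O library; the record count and chunk size are arbitrary assumptions.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "out.dat")
record = b"x" * 64  # a small record, far below the filesystem block size

# Anti-pattern: one tiny unbuffered write per record -> one syscall each.
with open(path, "wb", buffering=0) as f:
    for _ in range(10_000):
        f.write(record)

# Better: aggregate records in memory and flush large chunks.
with open(path, "wb") as f:
    buf = bytearray()
    for _ in range(10_000):
        buf += record
        if len(buf) >= 256 * 1024:  # flush in ~256 KiB chunks
            f.write(buf)
            buf.clear()
    f.write(buf)  # flush the remainder
```

Both variants produce the same file; the aggregated version simply reaches the filesystem in far fewer, far larger operations.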
== Accounting of workspace ==
The accounting of workspaces for academic projects (currently for ws10 on hawk) is:

''resource usage = max ( compute resource usage, storage usage )''

based on a calendar month.
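The accounting rule above can be illustrated with a toy calculation; the numbers are made up, and the units would be whatever the allocation is accounted in.

```python
def monthly_resource_usage(compute_usage, storage_usage):
    """Per calendar month, the charged usage is the maximum of the
    compute usage and the storage usage (illustrative helper)."""
    return max(compute_usage, storage_usage)

# A compute-heavy month: compute dominates, storage adds no extra charge.
busy_month = monthly_resource_usage(1000, 200)

# An idle month with data left on the workspace: storage alone
# consumes the allocation, which is why freeing disk space matters.
idle_month = monthly_resource_usage(0, 800)
```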
This rule is intended to motivate users to free up disk space if the project will be inactive for an extended period. If users do not remove the data of inactive projects, the project report must justify why the allocation has been used for storage. Please be aware that large amounts of data may reduce the allocation significantly.
For administrative reasons, a small usage of the workspace (a few TB) will not reduce the allocation. The project administrator will be informed via email if the project's ''resource usage'' is being impacted by storage usage.
== Related links ==
* [[Workspace_mechanism]]
* [[High_Performance_Storage_System_(HPSS)]]
Latest revision as of 12:22, 12 December 2024