- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

NEC Cluster Disk Storage (laki + laki2)

HOME Directories

All user HOME directories for every compute node of the cluster are located on the distributed NEC GFS, which is shared with the NEC SX8 and NEC SX9 clusters. The compute nodes and the login node (frontend) have the HOME directories mounted via NFS, so the path to your HOME directory is the same on every node of the cluster. The filesystem space on HOME is limited by a quota! Due to the limited network performance, the HOME filesystem is not intended for fast I/O or for large files!
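A quick way to check your HOME usage from the frontend is sketched below. This assumes standard Linux quota tools are usable on the NFS-mounted HOME; the exact command available at HLRS may differ, so treat it as an illustration rather than documented behaviour.

  # Show your current disk quota and usage in human-readable units
  # (requires quota support on the NFS mount)
  quota -s

  # Alternatively, check how much space your HOME directory currently occupies
  du -sh $HOME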


SCRATCH directories

For large files and fast I/O, please use

  • lustre
    A fast distributed cluster filesystem using the InfiniBand network infrastructure. This filesystem is available on all compute nodes and on the frontend/login nodes. The capacity is 43 TByte, the bandwidth is about 400 MByte/s.

You are responsible for obtaining scratch space from the system yourself. To get access to this global scratch filesystem you have to use the workspace mechanism (https://kb.hlrs.de/platforms/index.php/Workspace_mechanism).
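A minimal sketch of a typical workspace session is shown below. It assumes the usual HLRS workspace commands (ws_allocate, ws_list, ws_release) are available on the login nodes; the workspace name and duration are examples, and the exact options and maximum lifetimes may differ, so refer to the linked workspace mechanism page for the authoritative usage.

  # Allocate a scratch workspace named "myrun" for 10 days;
  # the command prints the absolute path of the new workspace
  SCRDIR=$(ws_allocate myrun 10)
  cd $SCRDIR

  # List your existing workspaces and their remaining lifetimes
  ws_list

  # Release the workspace once the results have been copied elsewhere
  ws_release myrun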

Filesystem Policy

IMPORTANT! NO BACKUP!! There is NO backup of any user data located on the HWW Cluster systems. The only protection of your data is the redundant disk subsystem; this RAID system can handle the failure of a single component. There is NO way to recover inadvertently removed data. Users have to back up critical data at their local site!
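A minimal sketch of copying results back to your own site is shown below, assuming your local machine is reachable via SSH from the frontend; the hostname, username and paths are placeholders, not HLRS-specific values.

  # Copy a results directory from the cluster to your local site (run on the frontend)
  rsync -av --progress /path/to/workspace/results/ user@my.local.host:/backup/project/results/

  # Alternatively, pack the data first and transfer a single archive
  tar czf results.tar.gz /path/to/workspace/results
  scp results.tar.gz user@my.local.host:/backup/project/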