NEC Cluster Hardware and Architecture (vulcan)
Hardware
The list of currently available hardware can be found in the node types overview of the batch system documentation: https://kb.hlrs.de/platforms/index.php/Batch_System_PBSPro_(vulcan)#Node_types
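A quick way to see which node types the batch system currently reports is to query PBSPro directly. The following is a minimal sketch, assuming the pbsnodes command is in your PATH and that the nodes expose a site-specific "node_type" resource as suggested by the page linked above; the resource name is an assumption, not guaranteed here.

# Sketch only: summarise node types as reported by PBSPro.
# Assumes "pbsnodes" is available and nodes expose a custom
# "resources_available.node_type" attribute (site-specific assumption).
import subprocess
from collections import Counter

def node_type_counts():
    # "pbsnodes -a" prints one attribute block per node, one "key = value" line each.
    out = subprocess.run(["pbsnodes", "-a"], capture_output=True,
                         text=True, check=True).stdout
    types = [line.split("=", 1)[1].strip()
             for line in out.splitlines()
             if "resources_available.node_type" in line]
    return Counter(types)

if __name__ == "__main__":
    for ntype, count in sorted(node_type_counts().items()):
        print(f"{ntype}: {count} nodes")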
Architecture
The NEC Cluster platform (vulcan) consists of several frontend nodes for interactive access (for access details see Access) and several compute nodes of different types for the execution of parallel programs. Some of the compute nodes come from the old NEC cluster laki.
Compute node types installed (a sketch of how to request a specific node type follows the list):
- Intel Xeon Broadwell, Skylake, CascadeLake
- AMD EPYC Rome, Genoa
- different memory sizes (256 GB, 384 GB, 512 GB, 768 GB)
- Pre-/post-processing nodes with very large memory (1.5 TB, 3 TB)
- Visualisation/GPU nodes with AMD Radeon Pro WX8200, Nvidia Quadro RTX4000 or Nvidia A30
- Vector nodes with NEC SX-Aurora TSUBASA vector processors
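As a rough illustration of how one of the node types above is requested, the sketch below assembles a PBSPro qsub call. The resource name "node_type", the value "clx-25", and the script name "job_script.sh" are illustrative assumptions; take actual values from the Batch_System_PBSPro_(vulcan) page linked in the Hardware section.

# Sketch only: request one specific node type through PBSPro's select syntax.
# "node_type=clx-25" and "job_script.sh" are hypothetical placeholders.
import subprocess

select = "1:node_type=clx-25:mpiprocs=40"   # hypothetical CascadeLake node request
cmd = [
    "qsub",
    "-l", f"select={select}",
    "-l", "walltime=00:20:00",
    "job_script.sh",                        # placeholder batch script
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
print("Submitted job:", result.stdout.strip())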
Features
- Operating System: Rocky Linux 8
- Batch system: PBSPro
- Node-node interconnect: InfiniBand + 10G Ethernet (a minimal MPI check is sketched below)
- Global disk: 2.2 PB (Lustre) for vulcan + 500 TB (Lustre) for vulcan2
- Many software packages for development
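Since the compute nodes are intended for parallel programs over the InfiniBand interconnect, a small MPI placement check can confirm that ranks land on the expected nodes. This is a minimal sketch, assuming the mpi4py Python module and an MPI launcher (e.g. mpirun inside a batch job) are available, which is not stated above.

# Sketch only: minimal MPI placement check, assuming mpi4py and an MPI runtime
# are available, e.g. launched with "mpirun -np 4 python3 mpi_hello.py".
from mpi4py import MPI
import socket

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank reports its host, so node placement across the interconnect
# can be verified quickly.
print(f"rank {rank} of {size} running on {socket.gethostname()}")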