- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

NEC Cluster Hardware and Architecture (vulcan)

From HLRS Platforms

Revision as of 13:08, 8 September 2022

Hardware

  • Pre- & Postprocessing node (smp node)
    • 1 node
      • 2x Intel Xeon Gold 6148, 40 cores total @ 2.40GHz
      • 1.5TB memory
      • Nvidia Quadro K6000
      • 2TB HDD mounted at /tmp
      • 13TB HDD mounted at /localscratch
      • shared access
  • CascadeLake 40 cores compute nodes (clx)
    • 96 nodes (clx-25, clx384gb40c)
    • 8 nodes (clx-21, clx384gb40c-ai)
      • 2x Intel Xeon Gold 6230, 40 cores total @ 2.10GHz
      • 384GB memory
      • 1.8TB NVMe mounted at /localscratch
  • CascadeLake 36 cores compute nodes (clx-ai) for artificial intelligence and big data applications
    • 8 nodes (clx768gb36c-ai)
      • 2x Intel Xeon Gold 6240, 36 cores total @ 2.60GHz
      • 768GB memory
      • 8x Nvidia Tesla V100 SXM2 32GB
      • 7.3TB NVMe mounted at /localscratch
      • 220GB SSD mounted at /tmp
  • Haswell 20 Cores compute nodes (hsw)
    • 2x Intel Xeon E5-2660v3, 20 cores total @ 2.60GHz
    • 76 nodes (hsw128gb20c)
      • 128GB RAM
    • 4 nodes (hsw256gb20c)
      • 256GB RAM
  • Haswell 24 Cores compute nodes (hsw)
    • 2x Intel Xeon E5-2668v3, 24 cores total @ 2.50GHz
    • 152 nodes (hsw128gb24c)
      • 128GB memory
    • 16 nodes (hsw256gb24c)
      • 256GB memory
  • Skylake 40 Cores compute nodes (skl)
    • 88 nodes (skl192gb40c)
  • Visualisation node with GPUs
    • 2x Intel Xeon Silver 4112, 8 cores total @ 2.60GHz
    • 96GB memory
    • 6 nodes (visamd)
      • AMD Radeon Pro WX8200
    • 1 node (visnv)
      • Nvidia Quadro RTX 4000
  • Visualisation/GPGPU graphic nodes (visp100)
    • 10 nodes
      • 2x Intel Xeon E5-2667v4, 16 cores total @ 3.20GHz
      • 256GB memory
      • Nvidia Tesla P100 12GB
      • 3.7TB SSD mounted at /localscratch
      • 400GB SSD mounted at /tmp


  • Interconnect: InfiniBand
    • Various generations of InfiniBand switches with QDR, FDR, EDR and HDR speed
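Once a job is running on one of the node types above, the local resources can be checked against this list. A minimal sketch; the GPU and InfiniBand tools exist only on matching node types, so those calls are guarded rather than assumed present everywhere:

```shell
# Sketch: verify an allocated node against the hardware specs listed above.
# nvidia-smi and ibstat are present only on GPU and InfiniBand-equipped
# nodes, so they are guarded; '|| true' keeps the check harmless elsewhere.
nproc                                # logical core count, e.g. 40 on a clx node
free -h | grep Mem                   # installed memory, e.g. 384GB on a clx node
df -h /tmp                           # local scratch mount (/localscratch where present)
{ command -v nvidia-smi >/dev/null && nvidia-smi; } || true   # GPU nodes only
{ command -v ibstat >/dev/null && ibstat; } || true           # InfiniBand link status
```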

Architecture

The NEC Cluster platform (vulcan) consists of several frontend nodes for interactive access (for access details see Access) and several compute nodes of different types for the execution of parallel programs. Some of the compute nodes come from the old NEC Cluster laki.
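The frontend/compute split described above can be sketched as a short terminal session. The hostname and the node_type value are illustrative assumptions, not confirmed values; consult the Access page and the batch system documentation for the real ones.

```shell
# Hypothetical session: log in to a vulcan frontend, then hand work to the
# compute nodes via the PBSPro batch system (frontend hostname is a placeholder).
ssh username@vulcan.hlrs.de
qsub -l select=2:node_type=hsw128gb24c \
     -l walltime=01:00:00 job.sh      # node_type label taken from the Hardware list
qstat -u username                     # check the job's queue status
```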

Compute node types installed:

  • Haswell, Skylake, CascadeLake
  • nodes with different memory configurations (128GB, 256GB, 384GB)
  • Pre-Postprocessing node with very large memory (1.5TB)
  • Visualisation/GPU nodes with AMD Radeon Pro WX8200, Nvidia Quadro RTX4000 or Nvidia Tesla P100
  • AI nodes with Nvidia Tesla V100
  • Vector nodes with NEC Aurora TSUBASA CPUs

Features

  • Operating System: CentOS 7
  • Batchsystem: PBSPro
  • node-node interconnect: InfiniBand + GigE
  • Global Disk 2.2 PB (Lustre) for vulcan + 500 TB (Lustre) for vulcan2
  • Many Software Packages for Development
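Since PBSPro is the batch system, a job script for this cluster would follow the usual PBS directive form. A minimal sketch; the node_type resource name mirrors the labels in the Hardware list (e.g. clx384gb40c), but whether the scheduler exposes exactly this resource, and the placeholder program name, are assumptions to be checked against the site's batch documentation:

```shell
#!/bin/bash
# Minimal PBSPro job script sketch (config fragment, not a confirmed template).
#PBS -N example_job
#PBS -l select=1:node_type=clx384gb40c:mpiprocs=40   # node_type value is an assumption
#PBS -l walltime=00:20:00

cd "$PBS_O_WORKDIR"             # run from the directory the job was submitted from
mpirun -np 40 ./my_mpi_program  # my_mpi_program is a placeholder binary
```

Submitted with `qsub job.sh`; output and error files land in the submission directory by default.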