- Infos im HLRS Wiki sind nicht rechtsverbindlich und ohne Gewähr -
- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

NEC Cluster Hardware and Architecture (vulcan)

Revision as of 15:58, 27 September 2018

Hardware

  • Pre- & Postprocessing node (smp node)
    • 8x Intel Xeon X7542 6-core CPUs with 2.67 GHz (8 × 6 = 48 cores)
    • 1TB RAM
    • shared access
  • Visualisation node (vis)
    • ??? nodes each with 8 cores Intel W3540 and 24GB memory
      • Nvidia Quadro FX5800
  • SandyBridge compute nodes
    • ??? nodes Dual Intel 'Sandy Bridge' E5-2670 (204 for laki and 124 for laki2)
      • 2.6 GHz, 8 cores per processor, 16 threads
      • 4 memory channels per processor, DDR3 1600 MHz memory
      • ??? nodes with 32 GB RAM (sb/mem32gb)
      • ??? nodes with 64 GB RAM (mem64gb)
      • QDR Mellanox ConnectX-3 IB HCAs (40 Gbit/s)
  • Haswell 20 Cores compute nodes
    • 80 nodes Dual Intel 'Haswell' E5-2660v3
      • 2.6 GHz, 10 cores per processor, 20 threads
      • 4 memory channels per processor, DDR4 2133 MHz memory
      • 76 nodes with 128 GB RAM (hsw128gb10c)
      • 4 nodes with 256 GB RAM (hsw256gb10c)
      • QDR Mellanox ConnectX-3 IB HCAs (40 Gbit/s)
  • Haswell 24 Cores compute nodes
    • 360 nodes Dual Intel 'Haswell' E5-2680v3
      • 2.5 GHz, 12 cores per processor, 24 threads
      • 4 memory channels per processor, DDR4 2133 MHz memory
      • 344 nodes with 128 GB RAM (hsw128gb12c)
      • 16 nodes with 256 GB RAM (hsw256gb12c)
      • QDR Mellanox ConnectX-3 IB HCAs (40 Gbit/s); 144 of the 128 GB nodes have FDR IB (fdr)
  • Skylake 40 Cores compute nodes
    • 100 nodes Dual Intel Xeon Gold 6138 CPU @ 2.00 GHz [https://www.intel.com/content/www/us/en/products/processors/xeon/scalable/gold-processors/gold-6138.html]
      • 2.0 GHz, 20 cores per processor, 40 threads
      • 6 memory channels, DDR4 2666 MHz memory
      • 192 GB RAM
      • EDR Mellanox ConnectX-5 IB HCAs (100 Gbit/s)


  • 10 Visualisation/GPU graphic nodes with
    • Nvidia Tesla P100 12 GB
    • 2 sockets, each with 8 cores (Intel E5-2667v4 @ 3.2 GHz)
    • 256 GB memory
    • 3.7 TB /localscratch, 400 GB SSD /tmp
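The labels given in parentheses in the list above (hsw128gb10c, hsw256gb12c, fdr, ...) identify the node types. A common PBS Pro pattern is to request a specific node type in the job's select statement. The following is only a sketch: the resource name node_type is an assumption based on these labels, not something this page documents; check the batch-system pages of this wiki for the actual resource names.

```shell
# Hedged sketch: request one Haswell node with 128 GB RAM via PBS Pro.
# "node_type" and its value are assumptions drawn from the labels above.
#PBS -l select=1:node_type=hsw128gb10c
#PBS -l walltime=01:00:00
```

Requesting a type explicitly (rather than relying on defaults) matters here because the cluster mixes Sandy Bridge, Haswell, and Skylake nodes with different memory sizes and interconnects.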


Architecture

The NEC Cluster platform (vulcan) consists of several frontend nodes for interactive access (for access details see Access) and several types of compute nodes for executing parallel programs. Some of the compute nodes come from the old NEC Cluster laki.


Compute node types installed:

  • Sandy Bridge, Haswell, Skylake
  • nodes with different memory sizes (32 GB, 64 GB, 128 GB, 256 GB, 384 GB)
  • Pre-/postprocessing node with very large memory (1 TB)
  • Visualisation/GPU nodes with Nvidia Quadro FX5800 or Nvidia Tesla P100


Features

  • Operating system: CentOS 7
  • Batch system: PBS Pro
  • Node-node interconnect: InfiniBand + GigE
  • Global disk: 500 TB (Lustre) for vulcan + 500 TB (Lustre) for vulcan2
  • Many software packages for development
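Since the batch system is PBS Pro, a job on this cluster is typically submitted as a script via qsub. The following is a minimal, hedged sketch only: the node_type resource is an assumption based on the node labels listed under Hardware, and my_program stands in for the user's own executable.

```shell
#!/bin/bash
# Minimal PBS Pro job-script sketch for vulcan (assumptions marked below).
#PBS -N example_job
#PBS -l select=1:node_type=hsw128gb12c   # node_type resource assumed from the labels above
#PBS -l walltime=00:10:00

cd "$PBS_O_WORKDIR"   # PBS Pro sets this to the directory qsub was run from
./my_program          # placeholder for the user's executable
```

Submission would then be, e.g., `qsub job.sh`; see the Access and batch-system pages of this wiki for the authoritative details.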