NEC Cluster Hardware and Architecture (vulcan)
Hardware
- Pre- & Postprocessing node (smp node)
- 8x Intel Xeon X7542 6-core CPUs at 2.67 GHz (8 × 6 = 48 cores)
- 1TB RAM
- shared access
- Visualisation node (vis)
- ??? nodes each with 8 cores Intel W3540 and 24GB memory
- Nvidia Quadro FX5800
- SandyBridge compute nodes
- ??? nodes dual Intel 'Sandy Bridge' E5-2670 (204 for laki and 124 for laki2)
- 2.6 GHz, 8 cores per processor, 16 threads
- 4 memory channels per processor, DDR3-1600 memory (see the peak-bandwidth sketch after this list)
- ??? nodes with 32GB RAM (sb/mem32gb)
- ??? nodes with 64GB RAM (mem64gb)
- QDR Mellanox ConnectX-3 IB HCAs (40 Gbit/s)
- Haswell 20 Cores compute nodes
- 80 nodes dual Intel 'Haswell' E5-2660v3
- 2.6 GHz, 10 cores per processor, 20 threads
- QDR Mellanox ConnectX-3 IB HCAs (40 Gbit/s)
- Haswell 24 Cores compute nodes
- 360 nodes dual Intel 'Haswell' E5-2680v3
- 2.5 GHz, 12 cores per processor, 24 threads
- QDR Mellanox ConnectX-3 IB HCAs (40 Gbit/s); 144 of the 128GB nodes have FDR IB (fdr)
- 10 additional GPU graphics nodes with
- Nvidia Tesla P100 12GB
- 2 sockets, each with 8 cores (Intel E5-2667v4 @ 3.2 GHz)
- 256GB memory
- 3.7TB /localscratch, 400GB SSD /tmp
- network: InfiniBand Double Data Rate
- switches for interconnect: Voltaire Grid Director 4036 with 36 QDR (40Gbps) ports (6 backbone switches)
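The channel and clock figures above pin down each node's theoretical peak memory bandwidth. Below is a minimal sketch of that arithmetic for the Sandy Bridge nodes; the 8-byte channel width is the DDR3 standard and is the only number not taken from the list above.

 # Theoretical peak memory bandwidth of a Sandy Bridge node, computed
 # from the figures in the hardware list above.
 CHANNELS_PER_SOCKET = 4        # "4 memory channels per processor"
 SOCKETS_PER_NODE = 2           # dual E5-2670
 TRANSFERS_PER_SECOND = 1600e6  # DDR3-1600: 1600 MT/s
 BYTES_PER_TRANSFER = 8         # 64-bit DDR3 channel (standard, not from this page)

 per_socket = TRANSFERS_PER_SECOND * BYTES_PER_TRANSFER * CHANNELS_PER_SOCKET
 per_node = per_socket * SOCKETS_PER_NODE
 print(f"peak per socket: {per_socket / 1e9:.1f} GB/s")  # 51.2 GB/s
 print(f"peak per node:   {per_node / 1e9:.1f} GB/s")    # 102.4 GB/s

Real-world sustained bandwidth (e.g. STREAM) will be noticeably lower; the figure is an upper bound useful for sizing memory-bound jobs.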
Architecture
The NEC Cluster platform (vulcan) consists of several frontend nodes for interactive access (for access details see Access) and several compute nodes of different types for the execution of parallel programs. Some of the compute nodes come from the old NEC cluster laki.
Compute node types installed:
- Intel Xeon 5560 (Nehalem)
- Intel Xeon E5-2670 (Sandy Bridge)
- AMD Opteron 6238 (Interlagos)
- Intel E5-2680v3 and E5-2660v3 (Haswell)
- Nvidia Tesla S1070 (consisting of C1060 devices)
- Large memory nodes (144GB, 256GB)
- Pre-/postprocessing node with very large memory (1TB)
- Visualisation nodes with Nvidia Quadro FX5800 or Nvidia Tesla P100
- Nodes with different memory sizes (12GB, 24GB, 32GB, 48GB, 64GB, 128GB, 256GB)
Features
- Operating System: CentOS 7
- Batch system: PBSPro (see the submission sketch below)
- Node-node interconnect: InfiniBand + GigE
- Global Disk: 500TB (Lustre) for vulcan + 500TB (Lustre) for vulcan2
- Many software packages for development
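Since the batch system is PBSPro and the hardware list tags node flavours with labels such as sb/mem32gb, mem64gb and fdr, a job would typically select them through PBSPro's select syntax. The following is a hypothetical sketch: the `-l select=...:ncpus=...` form is standard PBSPro, but treating the wiki's labels as boolean custom resources is an assumption to verify on the system (e.g. with `pbsnodes -a`).

 #!/usr/bin/env python3
 """Hypothetical sketch: submit a PBSPro job requesting one of the
 node flavours listed above. The select/ncpus syntax is standard
 PBSPro; the feature name 'mem64gb' comes from the hardware list,
 but its exposure as a boolean custom resource is an assumption."""
 import subprocess

 def submit(script="job.sh", chunks=1, ncpus=24, feature="mem64gb"):
     # One chunk per node; ncpus=24 matches the 24-core Haswell nodes.
     select = f"select={chunks}:ncpus={ncpus}:{feature}=true"
     cmd = ["qsub", "-l", select, script]
     print("running:", " ".join(cmd))
     subprocess.run(cmd, check=True)

 if __name__ == "__main__":
     submit()

The same pattern would apply to the other labels (sb/mem32gb, fdr), swapping the feature name and core count to match the node type being requested.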