- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

NEC Cluster Hardware and Architecture (laki + laki2)

__TOC__
=== Hardware ===
<s>
* ~150 compute nodes are of type NEC HPC-144 Rb-1 Server (see [http://www.nec.com/de/prod/solutions/lx-series/index.html  NEC Products])
** dual CPU compute nodes: 2x [http://ark.intel.com/Product.aspx?id=37109&processor=X5560&spec-codes=SLBF4 Intel Xeon X5560] Nehalem EP "Gainestown" ([http://www.intel.com/products/processor/xeon5000/ 5000 Sequence] [http://www.intel.com/products/processor/xeon5000/specifications.htm specifications])
*** 4 cores, 8 threads
*** 2.80 GHz (3.20 GHz max. Turbo frequency)
*** 8MB L3 Cache
*** 1333 MHz Memory Interface, 6.4 GT/s QPI
*** TDP 95W, 45nm technology
*** [http://de.wikipedia.org/wiki/Intel-Nehalem-Mikroarchitektur "Nehalem"] microarchitecture
** compute node RAM: triple-channel memory
*** standard: 12 GB RAM (''nehalem''/''mem12gb'')
*** 20 nodes upgraded to 24GB (''mem24gb''), 48GB (''mem48gb'') or 144GB (''mem144gb'') RAM
**** 2 of the 144GB memory nodes additionally have a 6TB local scratch disk installed
**** 1 of the 144GB memory nodes additionally has a 2TB local scratch disk installed
** 16 compute nodes additionally have [http://www.nvidia.com/object/tesla_computing_solutions.html Nvidia Tesla S1070 GPUs] installed.
</s>


*''' Pre- & Postprocessing node''' (''smp'' node)
** 8x Intel Xeon [http://ark.intel.com/products/46497/Intel-Xeon-Processor-X7542-(18M-Cache-2_66-GHz-5_86-GTs-Intel-QPI) X7542] 6-core CPUs with 2.67GHz (8*6=48 Cores)
** 1TB RAM
** shared access


*'''Visualisation node''' (''vis'')
** 5 nodes, each with 8 cores Intel [http://ark.intel.com/de/products/39719/Intel-Xeon-Processor-W3540-8M-Cache-2_93-GHz-4_80-GTs-Intel-QPI W3540] and 24GB memory (4 for laki and 1 for laki2)
*** Nvidia Quadro FX5800

* '''Node Upgrades''' (2012/2013)
** 128 nodes Dual Intel [[Sb|'Sandy Bridge']] [http://ark.intel.com/de/products/64595/Intel-Xeon-Processor-E5-2670-20M-Cache-2_60-GHz-8_00-GTs-Intel-QPI E5-2670] (204 for laki and 124 for laki2)
*** 2.6 GHz, 8 Cores per processor, 16 Threads
*** 4 memory channels per processor, DDR3 1600MHz memory
*** 96 nodes with 32GB RAM (''sb''/''mem32gb'')
*** 30 nodes with 64GB RAM (''mem64gb'')
*** QDR Mellanox ConnectX-3 IB HCAs (40Gbit)
 
* '''Node Upgrades''' (2014/2015)
** 80 nodes Dual Intel [[hsw|'Haswell']] [http://ark.intel.com/de/products/81706/Intel-Xeon-Processor-E5-2660-v3-25M-Cache-2_60-GHz E5-2660v3]
*** 2.6 GHz, 10 Cores per processor, 20 Threads
*** 4 memory channels per processor, DDR4 2133MHz memory
*** 76 nodes with 128GB RAM (''hsw128gb10c'')
*** 4 nodes with 256GB RAM (''hsw256gb10c'')
*** QDR Mellanox ConnectX-3 IB HCAs (40Gbit)
 
* '''Node Upgrades''' (2016/17)
** 360 nodes Dual Intel [[hsw|'Haswell']] [http://ark.intel.com/de/products/81908/Intel-Xeon-Processor-E5-2680-v3-30M-Cache-2_50-GHz E5-2680v3]
*** 2.5 GHz, 12 Cores per processor, 24 Threads
*** 4 memory channels per processor, DDR4 2133MHz memory
*** 344 nodes with 128GB RAM (''hsw128gb12c'')
*** 16 nodes with 256GB RAM (''hsw256gb12c'')
*** QDR Mellanox ConnectX-3 IB HCAs (40Gbit); 144 of the 128GB nodes have FDR IB (''fdr'')


* '''Additional [[Mem256gb|large memory nodes]]'''
** 10 nodes Quad Socket AMD Opteron [http://products.amd.com/en-ca/search/CPU/AMD-Opteron%E2%84%A2/AMD-Opteron%E2%84%A2-6200-Series-Processor/6238/32 6238]
** 2.6 GHz, 12 cores per processor
** 4 memory channels per processor, DDR3 1600MHz memory
** 256GB RAM (''mem256gb'')
** QDR Mellanox ConnectX-2 IB HCAs (40Gbit)
** 4 nodes additionally have a 4TB local scratch disk
* '''(Nov. 2017) 10 additional GPU graphics nodes, each with:'''
** Nvidia Tesla P100 12GB
** 2 sockets, each with 8 cores (Intel E5-2667v4 @ 3.2GHz)
** 256GB memory
** 3.7TB /localscratch, 400GB SSD /tmp
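
The italicised labels attached to the node classes above (e.g. ''mem12gb'', ''sb''/''mem32gb'', ''hsw128gb12c'', ''mem256gb'', ''fdr'') are PBS node properties that can be used to request a particular node type from the Torque/Moab batch system (see Features below). The following is a minimal sketch, not an official HLRS template: the chosen property, core count, walltime and the program name ''my_app'' are purely illustrative.

<pre>
#!/usr/bin/env python3
# Minimal sketch (illustrative, not an official HLRS example): submit a batch
# job to the Torque/Moab system and pin it to one node class via its PBS
# property ("hsw128gb12c" is taken from the hardware list above).
import subprocess

job_script = """#!/bin/bash
#PBS -l nodes=1:ppn=24:hsw128gb12c
#PBS -l walltime=01:00:00
cd $PBS_O_WORKDIR
./my_app            # placeholder for the user's own program
"""

# qsub reads the job script from stdin when no script file is given.
subprocess.run(["qsub"], input=job_script, text=True, check=True)
</pre>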


* '''network''': [http://de.wikipedia.org/wiki/Infiniband InfiniBand] Double Data Rate
** switches for interconnect: [http://www.voltaire.com/Products/Grid_Backbone_Switches Voltaire Grid Director] [http://www.voltaire.com/Products/InfiniBand/Grid_Director_Switches/Voltaire_Grid_Director_4036 4036] with 36 QDR (40Gbps) ports (6 backbone switches)


* [http://www.top500.org/ Top500] rankings for system [http://www.top500.org/system/9888 Baku]:
=== Architecture ===


The NEC Cluster platform (laki and laki2) consists of several '''frontend nodes''' for interactive access (for access details see [[NEC_Cluster_access_(laki_%2B_laki2)| Access]]) and several compute nodes of different types for execution of parallel programs.




'''Compute node types installed:'''
<s> * Intel Xeon 5560 (nehalem) </s>
* Intel Xeon E5-2670 (Sandy Bridge)
* AMD Opteron 6238 (Interlagos)
* Intel E5-2680v3 and E5-2660v3
<s> * Nvidia Tesla S1070 (consisting of C1060 devices) </s>
* Large Memory nodes (144GB, 256GB)
* Pre-Postprocessing node with very large memory (1TB)
* Visualisation nodes with Nvidia Quadro FX5800 or Nvidia Tesla P100
* Different memory nodes (<s> 12GB, 24GB,</s> 32GB, <s>48GB</s>, 64GB, 128GB, 256GB)


    
    
'''Features'''
* Operating System: ScientificLinux 6.9 ''(internal test was done on Windows HPC Server 2008)''
* Batchsystem: Torque/Moab
* node-node interconnect: Infiniband + GigE
* Global Disk 500 TB (lustre) for laki + 500TB (lustre) for laki2
* Many Software Packages for Development
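
As a usage sketch for the batch system named above (again illustrative rather than an official HLRS recipe), an interactive session on the shared pre- & postprocessing node can typically be requested through its ''smp'' property; the walltime below is an arbitrary example value.

<pre>
#!/usr/bin/env python3
# Illustrative sketch: request an interactive Torque/Moab session on the
# shared pre- & postprocessing node via its "smp" PBS property.
import subprocess

# "qsub -I" starts an interactive job; the walltime is only an example value.
subprocess.run(["qsub", "-I", "-l", "nodes=1:smp,walltime=00:30:00"], check=True)
</pre>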
 
 
