- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

NEC Cluster Hardware and Architecture (vulcan)

__TOC__
=== Hardware ===
The list of currently available hardware can be found [https://kb.hlrs.de/platforms/index.php/Batch_System_PBSPro_(vulcan)#Node_types here].


=== Architecture ===


The NEC Cluster platform (vulcan) consists of several '''frontend nodes''' for interactive access (for access details see [[NEC_Cluster_access_(vulcan)| Access]]) and several compute nodes of different types for the execution of parallel programs. Some of the compute nodes come from the old NEC Cluster laki.


'''Compute node types installed''' (see the request examples below):
* Intel Xeon Broadwell, Skylake and Cascade Lake
* AMD Epyc Rome and Genoa
* different memory sizes (256GB, 384GB, 512GB, 768GB)
* Pre-/postprocessing nodes with very large memory (1.5TB, 3TB)
* Visualisation/GPU nodes with AMD Radeon Pro WX8200, Nvidia Quadro RTX4000 or Nvidia A30
* Vector nodes with NEC Aurora TSUBASA CPUs
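How these node types are requested is documented on the linked [https://kb.hlrs.de/platforms/index.php/Batch_System_PBSPro_(vulcan)#Node_types batch system page]. As a minimal sketch, assuming node types are selected via a <code>node_type</code> resource in the PBSPro select statement (the concrete values shown are taken from the history table below and may change):

<pre>
# Request one CascadeLake node for one hour (node_type value 'clx-25' assumed):
qsub -l select=1:node_type=clx-25 -l walltime=01:00:00 job.sh

# Request one GPU node with an Nvidia A30 (node_type value 'genoa-a30' assumed):
qsub -l select=1:node_type=genoa-a30 -l walltime=01:00:00 job.sh
</pre>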
    
    
'''Features'''
* Operating System: Rocky Linux 8
* Batch system: PBSPro
* node-node interconnect: InfiniBand + 10G Ethernet
* Global Disk: 2.2 PB (Lustre) for vulcan + 500 TB (Lustre) for vulcan2
* Many software packages for development
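Putting these features together, a complete batch job might look like the following sketch. The <code>node_type</code> value, module name and executable are illustrative assumptions, not taken from this page:

<pre>
#!/bin/bash
#PBS -N example_job                           # job name
#PBS -l select=2:node_type=genoa:mpiprocs=64  # two Genoa nodes (assumed node_type), 64 MPI ranks each
#PBS -l walltime=00:30:00                     # 30 minutes wall time

cd "$PBS_O_WORKDIR"       # change to the directory the job was submitted from
module load mpi           # placeholder; load the MPI the installed software stack provides
mpirun -np 128 ./my_app   # 2 nodes x 64 ranks; ranks communicate over the InfiniBand interconnect
</pre>
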
=== History ===
{{Warning
| text = A hardware upgrade took place on 2024-05-24.<br>
Some of the compute nodes and the network infrastructure of vulcan have been replaced by up-to-date hardware.
}}
{| class="wikitable" border="1" cellpadding="2"
|+'''Replacement Overview:'''
|-
|'''node_type'''||'''node count before upgrade'''||'''node count after upgrade'''
|-
|''aurora''|| 8 || 8
|-
|''clx-21''|| 8 || 8
|-
|''clx-25''        || 96 ||        96
|-
|<font color=red>''clx-ai''</font>        ||  4 ||          <font color=red>0</font>
|-
|<font color=red>''hsw128gb20c''</font>  || 84 ||          <font color=red>0</font>
|-
|<font color=red>''hsw128gb24c''</font>  || 152 ||          <font color=red>0</font>
|-
|<font color=red>''hsw256gb20c''</font>  || 4 ||          <font color=red>0</font>
|-
|<font color=red>''hsw256gb24c''</font>  || 16 ||          <font color=red>0</font>
|-
|<font color=red>''k20xm''</font>        ||  3 ||          <font color=red>0</font>
|-
|''p100''          ||  3 ||          3
|-
|''skl''          || 68 ||        72
|-
|''smp''          ||  2 ||          1
|-
|''visamd''        ||  6 ||          6
|-
|''visnv''        ||  2 ||          2
|-
|<font color=red>''visp100''</font>      || 10 ||          <font color=red>0</font>
|-
|''rome256gb32c''  ||  3 ||          3 <sup>(1)(2)</sup>
|-
|''rome512gb96c-ai'' || 10 ||        10 <sup>(1)(3)</sup>
|-
|<font color=green>''genoa''</font>          || 0 ||        <font color=green>60</font> <sup>(4)(5)</sup>
|-
|<font color=green>''genoa-a30''</font>      || 0 ||        <font color=green>24</font> <sup>(4)(6)</sup>
|-
|<font color=green>''genoa-smp''</font>      || 0 ||          <font color=green>2</font> <sup>(4)(7)</sup>
|-
|}
<sup>
(1) academic usage only<br>
(2) 2x AMD Epyc 7302 Rome, 3.0GHz base, 32 cores total, 256GB DDR4, 3.5TB NVMe<br>
(3) 2x AMD Epyc 7642 Rome, 2.3GHz base, 96 cores total, 512GB DDR4, 1.8TB NVMe, 8x AMD Instinct MI50 with 32GB<br>
(4) new nodes, node_type not yet fixed<br>
(5) 2x AMD Epyc 9334 Genoa, 2.7GHz base, 64 cores total, 768GB DDR5<br>
(6) 2x AMD Epyc 9124 Genoa, 3.0GHz base, 32 cores total, 768GB DDR5, 3.8TB NVMe, 1x Nvidia A30 with 24GB HBM2e<br>
(7) 2x AMD Epyc 9334 Genoa, 2.7GHz base, 64 cores total, 3072GB DDR5<br>
</sup>
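
Which of the node types listed above are currently configured can be checked with the PBSPro client tools. A sketch, assuming each node exports its type as the custom resource <code>node_type</code> (as the table's naming suggests):

<pre>
# Count configured nodes per node_type as reported by PBSPro
pbsnodes -a | grep 'resources_available.node_type' | awk '{ print $3 }' | sort | uniq -c
</pre>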
