- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -
NEC Cluster Hardware and Architecture (laki + laki2)
Hardware
- 592 compute nodes (524 for laki and 68 for laki2) are of type NEC HPC-144 Rb-1 Server (see NEC Products)
- dual CPU compute nodes: 2x Intel Xeon X5560 "Gainestown" (5000 Sequence specifications)
- 4 cores, 8 threads
- 2.80 GHz (3.20 GHz max. turbo frequency)
- 8 MB L3 cache
- 1333 MHz memory interface, 6.4 GT/s QPI
- TDP 95 W, 45 nm technology
- "Nehalem" microarchitecture
- compute node RAM: triple-channel memory
- standard: 12 GB RAM
- 26 nodes upgraded to 24 GB, 48 GB, or 144 GB RAM
- 20 compute nodes have an additional Nvidia Tesla S1070 GPU installed.
- Pre- & Postprocessing node (smp node)
- 8x Intel Xeon X7542 6-core CPUs at 2.67 GHz (8 × 6 = 48 cores)
- 1 TB RAM
- shared access
- Node Upgrades (2012/2013)
- 316 nodes Dual Intel 'Sandy Bridge' E5-2670 (192 for laki and 124 for laki2)
- 2.6 GHz, 8 cores per processor, 16 threads
- 4 memory channels per processor, DDR3 1600 MHz memory
- 12 nodes with 64 GB RAM
- 304 nodes with 32 GB RAM
- QDR Mellanox ConnectX-3 IB HCAs (40 Gbit/s)
- Additional large memory nodes
- 10 nodes Quad Socket AMD Opteron 6238
- 2.6 GHz, 12 cores per processor
- 4 memory channels per processor, DDR3 1600 MHz memory
- 256 GB RAM
- QDR Mellanox ConnectX-2 IB HCAs (40 Gbit/s)
- network: InfiniBand Double Data Rate
- switches for interconnect: Voltaire Grid Director 4036 with 36 QDR (40 Gbit/s) ports (6 backbone switches)
- Top500 rankings for system Baku:
- June 2009: #77
- November 2009: #94
- June 2010: #110
- November 2010: #157
- June 2011: #305
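For orientation, the CPU figures above translate into the following theoretical double-precision peak performance per node (a back-of-the-envelope sketch, assuming 4 flops/cycle for the SSE units of the Nehalem cores and 8 flops/cycle for the AVX units of the Sandy Bridge cores):

```latex
% Theoretical DP peak per node = sockets x cores/socket x clock x flops/cycle
R_{peak}^{Nehalem}       = 2 \times 4 \times 2.80\,\mathrm{GHz} \times 4 = 89.6\ \mathrm{GFlop/s}
R_{peak}^{Sandy\ Bridge} = 2 \times 8 \times 2.60\,\mathrm{GHz} \times 8 = 332.8\ \mathrm{GFlop/s}
```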
Architecture
The NEC Nehalem Cluster platform consists of several frontend nodes for interactive access (for access details see Access) and several compute nodes for execution of parallel programs.
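The frontend nodes are reached via ssh; a minimal sketch (both the user name and the fully qualified host name below are placeholders, see the Access page for the actual login details):

```bash
# Log in to one of the frontend nodes listed in the table below
# (host name is a placeholder; see the Access page for real details)
ssh myusername@cl3fr1.hlrs.de
```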
Compute node CPU architecture types installed:
- Intel Xeon X5560 (Nehalem)
- Nvidia Tesla S1070 (consisting of C1060 devices)
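To use the Tesla devices, a node carrying the "tesla" PBS property (see the table below) must be requested from the batch system. A minimal sketch, assuming standard Torque syntax for an interactive job:

```bash
# Request one GPU-equipped node interactively via the "tesla" property
qsub -I -l nodes=1:tesla:ppn=8
# On the node, the CUDA SDK's deviceQuery sample (if installed)
# reports the S1070's C1060 devices
./deviceQuery
```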
Features
- Operating system: Scientific Linux 5.3 (internal test on Windows HPC Server 2008)
- Batch system: Torque/Maui/Moab (see the example job script below)
- node-node interconnect: InfiniBand + GigE
- global disk: 60 TB (Lustre)
- Open MPI
- Compilers: Intel, GCC, Java
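As an illustration of how these components fit together, a minimal Torque job script might look as follows (a sketch, not a site-verified template: "nehalem" is a PBS property from the node table below, and my_mpi_program is a placeholder for an executable built with one of the compilers above):

```bash
#!/bin/bash
# Minimal Torque job script: 2 Nehalem nodes, all 8 cores each
#PBS -N example_job
#PBS -l nodes=2:nehalem:ppn=8
#PBS -l walltime=00:20:00

cd $PBS_O_WORKDIR
# Open MPI is Torque-aware and places the ranks on the allocated nodes
mpirun -np 16 ./my_mpi_program
```

Submitted with `qsub job.pbs`, the job is then scheduled by Maui/Moab onto two of the Nehalem compute nodes.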
Function | Name | CPU | Sockets | Cores | Memory | Disk | PBS properties | Interconnect |
---|---|---|---|---|---|---|---|---|
Compute Nodes | n010501 - n143302 (700 nodes) | Intel Xeon X5560 2.80 GHz | 2 | 8 | 12 GB (default), 24 GB (8 nodes), 48 GB (8 nodes) | - | nehalem | InfiniBand |
Compute Nodes with Tesla S1070 | (32 nodes) | Intel Xeon X5560 2.80 GHz + Tesla S1070 GPU | 2 | 8 | 12 GB | - | tesla | InfiniBand |
Login Nodes | cl3fr1 / cl3fr2 | Intel Xeon X5560 2.80 GHz | 2 | 8 | 48 GB | 150 GB mirror | - | 10GigE/InfiniBand |
I/O Servers | (2 Lustre nodes) | Intel Xeon E5405 2.0 GHz | - | - | 8 GB | 60 TB | - | InfiniBand |
Infrastructure (NTP, PBS, DNS, DHCP, FTP, NAT, NFS, Imager, ...) | 6 nodes | - | - | - | - | - | - | GigE |
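To verify which PBS properties a particular node actually advertises, the standard Torque client tools can be queried (the node name below is taken from the table above):

```bash
# Show the PBS properties of a compute node, e.g. "nehalem" or "tesla"
pbsnodes n010501 | grep properties
```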