- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

NEC Cluster Hardware and Architecture (laki + laki2)

=== Hardware ===
* 700 compute nodes are of type NEC HPC-144 Rb-1 Server (see [http://www.nec.com/de/prod/solutions/lx-series/index.html  NEC Products])
** 2x [http://www.intel.com/products/processor/xeon5000/ Intel Xeon X5560] "Gainestown", 2.80 GHz (3.20 GHz max. Turbo frequency), 4 cores, 8 threads, 8 MB L3 cache, 1333 MHz memory interface, 6.4 GT/s QPI, TDP 95 W ([http://www.intel.com/products/processor/xeon5000/specifications.htm 5000 Sequence specifications], 5500) with the [http://de.wikipedia.org/wiki/Intel-Nehalem-Mikroarchitektur "Nehalem"] microarchitecture


* 32 of these compute nodes additionally have [http://www.nvidia.com/object/tesla_computing_solutions.html Nvidia Tesla S1070 GPUs] installed (a short device-enumeration sketch follows this list).
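The following C program is a minimal sketch, not an official HLRS example, of how the GPUs attached to such a node can be enumerated from the host. It assumes the CUDA toolkit (runtime library and headers) is available on the Tesla nodes; a node connected to an S1070 is expected to list its C1060 devices.

<pre>
/* Minimal sketch (assumes the CUDA toolkit is installed on the tesla
 * nodes): enumerate the GPUs visible to the host. Link against the CUDA
 * runtime, e.g. gcc gpuquery.c -I$CUDA_HOME/include -L$CUDA_HOME/lib64 -lcudart
 * (file name and paths are illustrative). */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    int i;

    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "no CUDA-capable device visible\n");
        return 1;
    }
    printf("%d CUDA device(s) visible\n", count);

    for (i = 0; i < count; ++i) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("device %d: %s, %.1f GB global memory\n",
               i, prop.name,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
</pre>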



=== Architecture ===

The NEC Nehalem Cluster platform consists of several frontend nodes for interactive access (for access details see Access) and several compute nodes for execution of parallel programs.
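As a minimal sketch of the kind of parallel program the compute nodes are meant to run (not an official HLRS example), the following C/MPI program prints each rank together with the node it executes on. It assumes the installed Open MPI and its mpicc compiler wrapper.

<pre>
/* Minimal MPI sketch: each process reports its rank and the node it runs
 * on. Build with the Open MPI wrapper, e.g. mpicc hello_mpi.c -o hello_mpi
 * (file name is illustrative), and start it through the batch system. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);

    printf("rank %d of %d running on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}
</pre>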


Compute node CPU architecture types installed:
* Intel Xeon X5560 (nehalem) (see the core-count sketch below)
* Nvidia Tesla S1070 (consisting of C1060 devices)
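A minimal sketch (not an official HLRS example) of how a nehalem node's processors appear to the operating system: with 2 sockets, 4 cores per socket and 2 hardware threads per core, a node is expected to report 16 logical CPUs when Hyper-Threading is enabled, and 8 otherwise.

<pre>
/* Minimal sketch: report the logical CPUs the OS exposes on one node.
 * Expected on a nehalem node: 16 with Hyper-Threading enabled, else 8. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long online = sysconf(_SC_NPROCESSORS_ONLN);
    printf("logical CPUs online: %ld\n", online);
    return 0;
}
</pre>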


=== Features ===

* Operating system: Scientific Linux 5.3 (internal test on Windows HPC Server 2008)
* Batch system: Torque/Maui/Moab
* Node-node interconnect: InfiniBand + GigE
* Global disk: 60 TB (Lustre)
* MPI: Open MPI
* Compilers: Intel, GCC, Java


=== Short overview of installed nodes ===
{| class="wikitable"
! Function !! Name !! CPU !! Sockets !! Cores !! Memory !! Disk !! PBS properties !! Interconnect
|-
| Compute nodes || n010501 - n143302 (700 nodes) || Intel Xeon X5560 2.80 GHz || 2 || 8 || 12 GB (default), 24 GB (8 nodes), 48 GB (8 nodes) || - || nehalem || InfiniBand
|-
| Compute nodes with Tesla S1070 || (32 nodes) || Intel Xeon X5560 2.80 GHz + Tesla S1070 GPU || 2 || 8 || 12 GB || - || tesla || InfiniBand
|-
| Login nodes || cl3fr1 / cl3fr2 || Intel Xeon X5560 2.80 GHz || 2 || 8 || 48 GB || 150 GB mirror || - || 10 GigE / InfiniBand
|-
| I/O servers || (2 Lustre nodes) || Intel Xeon E5405 2.0 GHz || || || 8 GB || 60 TB || - || InfiniBand
|-
| Infrastructure (NTP, PBS, DNS, DHCP, FTP, NAT, NFS, Imager, ...) || 6 nodes || || || || || || - || GigE
|}