- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

NEC Cluster Hardware and Architecture (laki + laki2)

From HLRS Platforms
=== Hardware ===
* 700 compute nodes are of type NEC HPC-144 RC-1 Server (see [http://www.nec.com/de/prod/solutions/lx-series/index.html NEC Products])
** 2x [http://www.intel.com/products/processor/xeon5000/ Intel Xeon X5560] [http://de.wikipedia.org/wiki/Intel-Nehalem-Mikroarchitektur "Nehalem"], 2.80 GHz, 8 MB cache, DDR3-1333 memory, 4 cores, 8 threads ([http://www.intel.com/products/processor/xeon5000/specifications.htm 5000 Sequence specifications], 5500)


* 32 of these compute nodes additionally have [http://www.nvidia.com/object/tesla_computing_solutions.html Nvidia Tesla C1070 GPUs] installed.

Revision as of 09:09, 28 July 2009


Architecture

The NEC Nehalem Cluster platform consists of several frontend nodes for interactive access (see Access for details) and several compute nodes for the execution of parallel programs.


Compute node CPU architecture types installed:

  • Intel Xeon X5560 (Nehalem)
  • Nvidia Tesla C1070 GPU


Features

  • Operating system: Scientific Linux 5.3 (internal test on Windows HPC Server 2008)
  • Batch system: Torque/Maui/Moab
  • Node-node interconnect: Infiniband + GigE
  • Global disk: 60 TB (Lustre)
  • MPI: OpenMPI
  • Compilers: Intel, GCC, Java
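
Jobs for the Torque/Maui/Moab batch system listed above are submitted as job scripts via qsub. A minimal sketch, assuming standard Torque syntax; the job name, walltime, rank count, and program name are placeholders, not taken from this page:

```shell
#!/bin/bash
# Minimal Torque/PBS job script sketch (hypothetical job; names are placeholders).
#PBS -N example_job
#PBS -l nodes=2:nehalem:ppn=8   # 2 Nehalem nodes, 8 cores per node (PBS property "nehalem")
#PBS -l walltime=00:10:00

# Torque starts the job in the home directory; change to the submission directory.
cd $PBS_O_WORKDIR

# OpenMPI is installed on the cluster; launch one MPI rank per requested core.
mpirun -np 16 ./my_mpi_program
```

The `ppn=8` value matches the two 4-core sockets per compute node described above.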


Short overview of installed nodes

{| class="wikitable"
! Function !! Name !! CPU !! Sockets !! Memory !! Disk !! PBS properties !! Interconnect
|-
| Compute nodes || n010501 - n143302 (700 nodes) || Intel Xeon X5560 2.80 GHz || 2 || 12 GB || - || nehalem || Infiniband
|-
| Compute nodes with Tesla GPU || 32 nodes || Intel Xeon X5560 2.80 GHz + Tesla S870 GPU || 2 || 12 GB || - || tesla || Infiniband
|-
| Login nodes || cl3fr1 / cl3fr2 || Intel Xeon X5560 2.80 GHz || 2 || 48 GB || 150 GB mirror || - || 10GigE/Infiniband
|-
| I/O servers || 2 Lustre nodes || Intel Xeon E5405 2.0 GHz || || 8 GB || 60 TB || - || Infiniband
|-
| Infrastructure (NTP, PBS, DNS, DHCP, FTP, NAT, NFS, Imager, ...) || 6 nodes || - || || - || - || - || GigE
|}
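
The "PBS properties" column above is what selects a node type at submission time. A sketch using standard Torque resource syntax with the property names from the table; queue defaults and the script name are assumptions:

```shell
# Request one of the 32 Tesla-equipped nodes for an interactive session
# (property "tesla" from the table; standard Torque -I/-l flags):
qsub -I -l nodes=1:tesla:ppn=8

# Submit a batch script to four plain Nehalem compute nodes
# (property "nehalem"; job.sh is a placeholder script name):
qsub -l nodes=4:nehalem:ppn=8 job.sh
```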