- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

NEC Cluster Hardware and Architecture (laki + laki2)

* switches for interconnect: [http://www.voltaire.com/Products/Grid_Backbone_Switches Voltaire Grid Director] [http://www.voltaire.com/Products/InfiniBand/Grid_Director_Switches/Voltaire_Grid_Director_4036 4036] with 36 QDR (40 Gbps) ports (6 backbone switches)


* [http://www.top500.org/ Top500] rankings for system [http://www.top500.org/system/9888 Baku]:
** [http://www.top500.org/lists/2009/06 June 2009 list] [http://www.top500.org/list/2009/06/100 (1-100)]: #77
** [http://www.top500.org/lists/2009/11 November 2009 list] [http://www.top500.org/list/2009/11/100 (1-100)]: #94
** [http://www.top500.org/lists/2010/06 June 2010 list] [http://www.top500.org/list/2010/06/200 (101-200)]: #110
** [http://www.top500.org/lists/2010/11 November 2010 list] [http://www.top500.org/list/2010/11/200 (101-200)]: #157
** [http://www.top500.org/lists/2011/06 June 2011 list] [http://www.top500.org/list/2011/06/400 (301-400)]: #304
* [http://www.green500.org/ Green500] rankings:
** [http://www.green500.org/lists/2009/06/top/list.php June 2009 list] [http://www.green500.org/lists/2009/06/top/list.php?from=1&to=100 (1-100)]: [http://www.green500.org/cert1.php?list=green200906&green500_rank=20 #20]
** [http://www.green500.org/lists/2009/11/top/list.php November 2009 list] [http://www.green500.org/lists/2009/11/top/list.php?from=1&to=100 (1-100)]: [http://www.green500.org/cert1.php?list=green200911&green500_rank=30 #30]
** [http://www.green500.org/lists/2010/06/top/list.php June 2010 list] [http://www.green500.org/lists/2010/06/top/list.php?from=1&to=100 (1-100)]: [http://www.green500.org/cert1.php?list=green201006&green500_rank=48 #48]
** [http://www.green500.org/lists/2010/11/top/list.php November 2010 list] [http://www.green500.org/lists/2010/11/top/list.php?from=1&to=100 (1-100)]: [http://www.green500.org/cert1.php?list=green201011&green500_rank=72 #72]


Hardware

  • 700 compute nodes are of type NEC HPC-144 Rb-1 Server (see NEC Products)
    • dual-CPU compute nodes: 2x Intel Xeon X5560 "Gainestown" (5000 Sequence specifications)
      • 4 cores, 8 threads
      • 2.80 GHz (3.20 GHz max. Turbo frequency)
      • 8 MB L3 cache
      • 1333 MHz memory interface, 6.4 GT/s QPI
      • TDP 95 W, 45 nm technology
      • "Nehalem" microarchitecture (a rough peak-performance estimate from these figures follows this list)
    • compute node RAM: triple-channel memory
      • standard: 12 GB RAM
      • 36 nodes upgraded to 24 GB, 48 GB, 128 GB or 144 GB RAM
    • 32 compute nodes have additional Nvidia Tesla S1070 GPUs installed.
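
Taken together, the figures above give a rough idea of the CPU partition's compute capacity. The short C sketch below multiplies them out to a theoretical double-precision peak; the factor of 4 floating-point operations per core and cycle for a Nehalem core, and the omission of Turbo and of the Tesla GPUs, are assumptions made for this estimate rather than figures from this page.

    /* Back-of-the-envelope peak estimate from the node figures above.
       Assumption (not from this page): a Nehalem core retires 4 double-
       precision flops per cycle (2-wide SSE add + 2-wide SSE multiply). */
    #include <stdio.h>

    int main(void)
    {
        const double nodes       = 700.0;    /* NEC HPC-144 Rb-1 compute nodes */
        const double sockets     = 2.0;      /* Xeon X5560 per node            */
        const double cores       = 4.0;      /* cores per socket               */
        const double clock_hz    = 2.80e9;   /* nominal clock, Turbo ignored   */
        const double flops_cycle = 4.0;      /* assumed DP flops/core/cycle    */

        const double peak = nodes * sockets * cores * clock_hz * flops_cycle;
        printf("theoretical peak: %.1f TFLOP/s\n", peak / 1e12);   /* ~62.7 */
        return 0;
    }

The 32 nodes with Tesla S1070 units would add to this, but their contribution depends on how the S1070 devices are shared between nodes and is left out of the estimate.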

Architecture

The NEC Nehalem Cluster platform consists of several frontend nodes for interactive access (for access details see Access) and several hundred compute nodes for the execution of parallel programs.


Compute node processor and accelerator types installed:

  • Intel Xeon X5560 (Nehalem)
  • Nvidia Tesla S1070 (consisting of C1060 devices)


Features

  • Operating system: Scientific Linux 5.3 (internal tests with Windows HPC Server 2008)
  • Batch system: Torque/Maui/Moab
  • Node-node interconnect: InfiniBand + GigE
  • Global disk: 60 TB (Lustre)
  • MPI: Open MPI (see the minimal example after this list)
  • Compilers: Intel, GCC, Java
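
Because the list above names Open MPI together with the Intel and GCC compilers, a minimal MPI program is sketched here. It uses only standard MPI calls; the build and launch steps mentioned afterwards are the usual Open MPI/Torque workflow and are given as an assumption about the local setup, not as site documentation.

    /* Minimal MPI example: each rank reports the compute node it runs on. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char node[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(node, &len);

        /* On this cluster the reported names would be compute nodes such as
           those listed in the overview below (n010501 - n143302). */
        printf("rank %d of %d running on %s\n", rank, size, node);

        MPI_Finalize();
        return 0;
    }

An assumed workflow would be to compile with the Open MPI wrapper (mpicc hello.c -o hello) and submit through Torque with a resource request that uses one of the PBS properties from the overview below, for example -l nodes=4:ppn=8:nehalem.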


Short overview of installed nodes
  • Compute nodes, n010501 - n143302 (700 nodes): 2 sockets Intel Xeon X5560 2.80 GHz, 8 cores, 12 GB memory (default; 24 GB on 8 nodes, 48 GB on 8 nodes), no local disk, PBS property "nehalem", InfiniBand interconnect
  • Compute nodes with Tesla S1070 (32 nodes): 2 sockets Intel Xeon X5560 2.80 GHz + Tesla S1070 GPU, 8 cores, 12 GB memory, no local disk, PBS property "tesla", InfiniBand interconnect
  • Login nodes cl3fr1 / cl3fr2: 2 sockets Intel Xeon X5560 2.80 GHz, 8 cores, 48 GB memory, 150 GB mirrored disk, 10GigE/InfiniBand interconnect
  • I/O servers (2 Lustre nodes): Intel Xeon E5405 2.0 GHz, 8 GB memory, 60 TB global disk, InfiniBand interconnect
  • Infrastructure nodes (NTP, PBS, DNS, DHCP, FTP, NAT, NFS, Imager, ...): 6 nodes, GigE interconnect