- Infos im HLRS Wiki sind nicht rechtsverbindlich und ohne Gewähr -
- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

NEC Cluster Hardware and Architecture (laki + laki2)

From HLRS Platforms
=== Hardware NEC Nehalem Cluster ===
 
 
 
 
=== Architecture ===
 
The NEC Nehalem Cluster platform consists of several '''frontend nodes''' for interactive access (for access details see [[NEC_Nehalem_Cluster_Access]]) and several compute nodes for execution of parallel programs.
 
 
'''Compute node CPU architecture types installed:'''
* Intel Xeon X5560 (Nehalem)
* Nvidia Tesla S870 GPU
 
 
'''Features'''
* Operating system: Scientific Linux 5.3 on the Intel-based nodes
* Batch system: Torque/Maui/Moab
* Node-node interconnect: Infiniband + GigE
* Global disk: 60 TB (Lustre)
* MPI: Open MPI
* Compilers: Intel, GCC, Java
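The compiler and MPI entries above are combined through Open MPI's compiler wrappers, which add the MPI include and link flags to the underlying Intel or GNU compiler. A minimal sketch of building an MPI program on a login node (the module names and the program name are illustrative assumptions; check <code>module avail</code> for the actual names on the system):

<pre>
# Select a toolchain (module names are assumptions, not verified)
module load compiler/intel mpi/openmpi

# mpicc wraps the loaded C compiler with the MPI flags
mpicc -O2 -o my_mpi_program my_mpi_program.c
</pre>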
 
 
{| border="1" cellpadding="2"
|+'''Short overview of installed nodes'''
!width="50"|Function
!width="50"|Name
!width="150"|CPU
!width="50"|Sockets
!width="50"|Memory
!width="50"|Disk
!width="50"|PBS properties
!width="80"|Interconnect
|-
|Compute Nodes||n010501 - n143302 (700 nodes) || Intel Xeon X5560 2.80 GHz || 2 || 12 GB || - || nehalem || Infiniband
|-
|Compute Nodes with Tesla S870 GPU|| (32 nodes) || Intel Xeon X5560 2.80 GHz + Tesla S870 GPU || 2 || 12 GB || - || tesla || Infiniband
|-
|Login Node||cl3fr1 / cl3fr2 || Intel Xeon X5560 2.80 GHz || 2 || 48 GB || 150 GB mirror || - || 10GigE/Infiniband
|-
|I/O Server||(2 Lustre nodes) || Intel Xeon E5405 2.0 GHz || || 8 GB || 60 TB || - || Infiniband
|-
|Infrastructure (NTP,PBS,DNS,DHCP,FTP,NAT,NFS,Imager,...)|| 6 nodes || || || || || - || GigE
|}
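
The ''PBS properties'' column above is used to request a specific node type from the batch system. A minimal sketch of a Torque job script for the Nehalem compute nodes (job name, core counts, walltime, and program name are illustrative, not prescribed values):

<pre>
#!/bin/bash
#PBS -N mpi_example            # job name (arbitrary)
#PBS -l nodes=2:ppn=8:nehalem  # 2 nodes with the "nehalem" property, 8 cores each (2 x quad-core X5560)
#PBS -l walltime=00:10:00      # requested wall-clock time

cd $PBS_O_WORKDIR              # change to the directory the job was submitted from

# Open MPI is the installed MPI implementation
mpirun ./my_mpi_program
</pre>

The script is submitted with <code>qsub job.pbs</code>. GPU-equipped nodes would be requested with the <code>tesla</code> property instead, e.g. <code>-l nodes=1:tesla</code>.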

Revision as of 09:00, 10 June 2009
