- Information in the HLRS Wiki is not legally binding and is provided without guarantee -

CRAY XC40 Hardware and Architecture

From HLRS Platforms

Revision as of 15:22, 1 October 2014

Installation Step 2 (Hornet production system)

Summary Hornet Production system (Phase 1 Step 2)

Cray Cascade XC40 Supercomputer Step 2
  Performance
    • Peak:                          3.79 Pflops
    • HPL:                           2.76 Pflops
  Cray Cascade Cabinets:             21
  Number of Compute Nodes:           3944 (dual socket)
  Compute Processors
    • Total number of CPUs:          3944 × 2 = 7888 Intel Haswell E5-2680v3, 2.5 GHz, 12 cores, 2 HT/core
    • Total number of Cores:         7888 × 12 = 94656
  Compute Memory on Scalar Processors
    • Memory Type:                   DDR4
    • Memory per Compute Node:       128 GB
    • Total Scalar Compute Memory:   3944 × 128 GB = 504832 GB ≈ 505 TB
  Interconnect:                      Cray Aries
  User Storage
    • Lustre Workspace Capacity:     5.4 PB

For detailed information see XC40-Intro
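The headline numbers in the table can be reproduced from the per-core figures. A quick sanity check, assuming the usual Haswell double-precision peak of 16 FLOPs per cycle per core (two AVX2 FMA units × 4 doubles × 2 operations); this assumption is not stated in the table itself:

```python
# Sanity check of the Hornet production-system figures.
nodes = 3944
sockets_per_node = 2
cores_per_cpu = 12
clock_hz = 2.5e9
flops_per_cycle = 16  # assumed DP peak per Haswell core (2 AVX2 FMA units x 4 doubles x 2 ops)

cores = nodes * sockets_per_node * cores_per_cpu
peak_pflops = cores * clock_hz * flops_per_cycle / 1e15
memory_gb = nodes * 128  # 128 GB DDR4 per node

print(cores)                   # 94656
print(round(peak_pflops, 2))   # 3.79
print(memory_gb)               # 504832 GB, i.e. ~505 TB
```

The computed 3.79 Pflops matches the quoted peak, which suggests the table's peak figure is indeed the theoretical AVX2 FMA maximum rather than a measured value; the HPL result (2.76 Pflops) is the measured one.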


Installation Step 2a (Hornet test system)

Summary Hornet Test system (Phase 1 Step 2a)

Cray Cascade XC30 Supercomputer Step 2a
  Cray Cascade Cabinets:             1
  Number of Compute Nodes:           164 (dual socket)
  Compute Processors
    • Total number of CPUs:          164 × 2 = 328 Intel Sandy Bridge, 2.6 GHz, 8 cores
    • Total number of Cores:         328 × 8 = 2624
  Compute Memory on Scalar Processors
    • Memory Type:                   DDR3, 1600 MHz
    • Memory per Compute Node:       64 GB
    • Total Scalar Compute Memory:   164 × 64 GB = 10496 GB ≈ 10 TB
  I/O Nodes:                         14
  Interconnect:                      Cray Aries
  External Login Servers:            2
  Pre- and Post-Processing Servers:  -
  User Storage
    • Lustre Workspace Capacity:     (330 TB)

Cray Linux Environment (CLE)                                 Yes
  • Compute Node Linux
  • Cluster Compatibility Mode (CCM)
  • Data Virtualization Services (DVS)
PGI Compiling Suite (FORTRAN, C, C++) incl. Accelerator      25 users (shared with Step 1)
Cray Developer Toolkit                                       Unlimited users
  • Cray Message Passing Toolkit (MPI, SHMEM, PMI, DMAPP, Global Arrays)
  • PAPI
  • GNU compiler and libraries
  • Java
  • Environment setup (Modules)
  • Cray Debugging Support Tools
    • lgdb
    • STAT
    • ATP
Cray Programming Environment                                 Unlimited users
  • Cray Compiling Environment (FORTRAN, C, C++)
  • Cray Performance Monitoring and Analysis
    • CrayPAT
    • Cray Apprentice2
  • Cray Math and Scientific Libraries
    • Cray Optimized BLAS
    • Cray Optimized LAPACK
    • Cray Optimized ScaLAPACK
    • IRT (Iterative Refinement Toolkit)
Allinea DDT Debugger                                         2048 processes (shared with Step 1)
Lustre Parallel Filesystem                                   Licensed on all sockets
Intel Composer XE                                            10 seats
  • Intel C++ Compiler XE
  • Intel Fortran Compiler XE
  • Intel Parallel Debugger Extension
  • Intel Integrated Performance Primitives
  • Intel Cilk Plus
  • Intel Parallel Building Blocks
  • Intel Threading Building Blocks
  • Intel Math Kernel Library
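The test-system hardware figures can likewise be checked from the node count. A quick arithmetic check, assuming dual-socket nodes with 8-core Sandy Bridge CPUs and 64 GB per node as listed above:

```python
# Sanity check of the Hornet test-system (Step 2a) figures.
nodes = 164
cpus = nodes * 2        # dual-socket nodes -> 328 Sandy Bridge CPUs
cores = cpus * 8        # 8 cores per CPU
memory_gb = nodes * 64  # 64 GB DDR3 per node

print(cpus, cores)      # 328 2624
print(memory_gb)        # 10496 GB, i.e. ~10 TB
```

Note that 328 eight-core CPUs give 2624 cores, three orders of magnitude fewer than the production system's 94656.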

Architecture

  • System Management Workstation (SMW)
    • the system administrator's console for managing a Cray system: monitoring, installing/upgrading software, controlling the hardware, and starting/stopping the XC30 system.
  • Service nodes are classified as:
    • login nodes, which give users access to the system
    • boot nodes, which provide the OS for all other nodes, licenses, ...
    • network nodes, which provide e.g. external network connections for the compute nodes
    • Cray Data Virtualization Service (DVS) nodes: DVS is an I/O forwarding service that can parallelize the I/O transactions of an underlying POSIX-compliant file system.
    • sdb nodes for services like ALPS, Torque, Moab, Slurm, Cray management services, ...
    • I/O nodes, e.g. for Lustre
    • MOM nodes for placing user jobs of the batch system into execution
  • In the future, the storage switch fabrics of Step 1 and Step 2a will be connected, so the Lustre workspace filesystems can be used from the hardware (login servers and pre-processing servers) of both Step 1 and Step 2a.

Step2a-concept.jpg