
CRAY XC40 Hardware and Architecture



* compute nodes
** are only available to users via the [[CRAY_XE6_and_XC30_Using_the_Batch_System | batch system]] and the Application Level Placement Scheduler (ALPS), see [http://docs.cray.com/cgi-bin/craydoc.cgi?mode=Show;q=2496;f=/books/S-2496-5001/html-S-2496-5001/cnl_apps.html running applications] and the sketch below this list.
*** Each compute node is equipped with 64 GB of memory and connected to the fast Cray Aries interconnect.
*** [http://www.cray.com/Assets/PDF/products/xc/CrayXC30Networking.pdf Details about the interconnect of the Cray XC series network]
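As a minimal sketch of an application that would be started on the compute nodes through ALPS (not part of the original page; the cc compiler wrapper and the aprun line shown in the comments are the usual Cray defaults and may need adjusting to the local environment), a small MPI program can report which compute node each rank was placed on:

<pre>
/*
 * Minimal MPI example (sketch, not from the original page).
 * Build with the Cray compiler wrapper, e.g.:  cc hello_mpi.c -o hello_mpi
 * Launch on the compute nodes through ALPS, e.g.:  aprun -n 16 ./hello_mpi
 * (exact module setup and aprun options depend on the local configuration)
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, namelen;
    char nodename[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(nodename, &namelen);

    /* Each rank reports the compute node it was placed on by ALPS. */
    printf("rank %d of %d on node %s\n", rank, size, nodename);

    MPI_Finalize();
    return 0;
}
</pre>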


Installation Step 2a (Hornet)

Summary Phase 1 Step 2a

{| class="wikitable"
! Cray Cascade XC30 Supercomputer !! Step 2a
|-
| Cray Cascade Cabinets || 1
|-
| Number of Compute Nodes || 164
|-
| Number of Compute Processors || 328 Intel Sandy Bridge (2.6 GHz, 8 cores)
|-
| Compute Memory on Scalar Processors
* Memory Type
* Memory per Compute Node
* Total Scalar Compute Memory
| DDR3 1600 MHz<br />64 GB<br />10 TB
|-
| I/O Nodes || 14
|-
| Interconnect || Cray Aries
|-
| External Login Servers || 2
|-
| Pre- and Post-Processing Servers || -
|-
| User Storage
* Lustre Workspace Capacity
| (330 TB)
|-
| Cray Linux Environment (CLE)
* Compute Node Linux
* Cluster Compatibility Mode (CCM)
* Data Virtualization Services (DVS)
| Yes
|-
| PGI Compiling Suite (FORTRAN, C, C++) including Accelerator || 25 users (shared with Step 1)
|-
| Cray Developer Toolkit
* Cray Message Passing Toolkit (MPI, SHMEM, PMI, DMAPP, Global Arrays)
* PAPI
* GNU compiler and libraries
* JAVA
* Environment setup (Modules)
* Cray Debugging Support Tools
** lgdb
** STAT
** ATP
| Unlimited users
|-
| Cray Programming Environment
* Cray Compiling Environment (FORTRAN, C, C++)
* Cray Performance Monitoring and Analysis
** CrayPAT
** Cray Apprentice2
* Cray Math and Scientific Libraries
** Cray-optimized BLAS
** Cray-optimized LAPACK
** Cray-optimized ScaLAPACK
** IRT (Iterative Refinement Toolkit)
| Unlimited users
|-
| Allinea DDT Debugger || 2048 processes (shared with Step 1)
|-
| Lustre Parallel Filesystem || Licensed on all sockets
|-
| Intel Composer XE
* Intel C++ Compiler XE
* Intel Fortran Compiler XE
* Intel Parallel Debugger Extension
* Intel Integrated Performance Primitives
* Intel Cilk Plus
* Intel Parallel Building Blocks
* Intel Threading Building Blocks
* Intel Math Kernel Library
| 10 seats
|}
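As a quick plausibility check (using only the node count and per-node memory quoted in the table above), the total scalar compute memory follows directly:

<math>164 \times 64\,\mathrm{GB} = 10\,496\,\mathrm{GB} \approx 10\,\mathrm{TB}</math>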

Architecture

  • System Management Workstation (SMW)
    • the system administrator's console for managing the Cray system: monitoring, installing and upgrading software, controlling the hardware, and starting and stopping the XC30 system.
  • service nodes are classified as:
    • login nodes, through which users access the system
    • boot nodes, which provide the OS for all other nodes, licenses, ...
    • network nodes, which provide e.g. the external network connections for the compute nodes
    • Cray Data Virtualization Service (DVS) nodes: DVS is an I/O forwarding service that can parallelize the I/O transactions of an underlying POSIX-compliant file system.
    • sdb node for services such as ALPS, Torque, Moab, SLURM, Cray management services, ...
    • I/O nodes, e.g. for Lustre
    • MOM nodes, which place user jobs of the batch system into execution
  • In the future, the storage switch fabrics of Step 1 and Step 2a will be connected, so that the Lustre workspace file systems can be used from the hardware (login servers and pre-processing servers) of both Step 1 and Step 2a.

Figure: Step 2a concept (Step2a-concept.jpg)