- Information in the HLRS Wiki is not legally binding and provided without warranty -

CRAY XE6 Hardware and Architecture

From HLRS Platforms

Hardware of Installation step 1


  • 3552 compute nodes / 113,664 cores
    • dual socket G34
      • 2× AMD Opteron™ 6276 (Interlagos) processors with 16 cores @ 2.3 GHz (up to 3.3 GHz with Turbo Core)
      • 32 MB combined L2+L3 cache, of which 16 MB is L3 cache
      • HyperTransport HT3, 6.4 GT/s (102.4 GB/s)
      • Peak performance: 2.3 GHz × 4 FLOPs/cycle × 16 cores = 147.2 GFLOP/s per socket, 294.4 GFLOP/s per node, and about 1 PFLOP/s (1,045,708.8 GFLOP/s) for the whole system
    • 32 GB RAM as standard; 480 nodes are equipped with 64 GB (126 TB memory in total)
  • 96 service nodes (network, MOM, router, DVS, boot, database, and syslog nodes)
  • High Speed Network CRAY Gemini
  • users HOME filesystem:
    • ~60 TB (BlueArc Mercury 55)
  • workspace filesystem:
    • Lustre parallel filesystem
    • capacity 2.7 PB realized with 16 DDN SFA10K controllers
    • I/O bandwidth ~150 GB/s
  • special user nodes:
    • external login servers
    • pre-post processing and visualization nodes
      • 128GB memory
      • one node comes with 1TB memory
      • local disks
      • powerful graphics cards with GPGPU post-processing support
      • direct access to the parallel filesystem
  • infrastructure servers
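The peak-performance figures above follow directly from clock rate × FLOPs per cycle × core count; a quick sketch using only the numbers listed above:

```python
# Peak performance of the XE6 step-1 installation, derived from the
# figures in the hardware list (4 double-precision FLOPs/cycle/core).
clock_ghz = 2.3          # base clock of the Opteron 6276
flops_per_cycle = 4      # DP FLOPs per core per cycle
cores_per_socket = 16
sockets_per_node = 2
nodes = 3552

per_socket = clock_ghz * flops_per_cycle * cores_per_socket   # ~147.2 GFLOP/s
per_node = per_socket * sockets_per_node                      # ~294.4 GFLOP/s
system = per_node * nodes                                     # ~1,045,708.8 GFLOP/s, i.e. ~1 PFLOP/s
total_cores = nodes * sockets_per_node * cores_per_socket     # 113,664 cores

print(per_socket, per_node, system, total_cores)
```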


  • System Management Workstation (SMW)
    • the system administrator's console for managing the Cray system: monitoring, installing and upgrading software, controlling the hardware, and starting and stopping the XE6 system
  • service nodes are classified as:
    • login nodes, for users to access the system
    • boot nodes, which provide the OS for all other nodes, licenses, ...
    • network nodes, which provide e.g. external network connections for the compute nodes
    • Cray Data Virtualization Service (DVS) nodes: an I/O forwarding service that can parallelize the I/O transactions of an underlying POSIX-compliant file system
    • sdb node, for services such as ALPS, torque, moab, Cray management services, ...
    • I/O nodes, e.g. for Lustre
    • MOM (torque) nodes, which place user jobs of the batch system into execution
  • compute nodes and pre-post processing nodes
    • are only available to users through the batch system and the Application Level Placement Scheduler (ALPS); see running applications.
      • Compute nodes with 32 GB and with 64 GB of memory are available, each with the fast interconnect (CRAY Gemini)
      • The pre- and postprocessing/visualization infrastructure aims to support users with
        • complex workflows and advanced access methods
        • remote graphics rendering and simulation steering, in order to minimize data movement
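As a sketch of how a job reaches the compute nodes through torque and ALPS: a minimal batch script might look as follows. The job name, resource-request values, and application path are illustrative, not taken from this page, and exact resource syntax varies by site.

```shell
#!/bin/bash
# Illustrative torque batch script for the XE6 (names and values hypothetical).
#PBS -N example_job
#PBS -l mppwidth=64           # total number of processing elements (MPI ranks)
#PBS -l mppnppn=32            # ranks per node -> two 32-core compute nodes
#PBS -l walltime=00:30:00

cd $PBS_O_WORKDIR
# ALPS places the job on the compute nodes; aprun is its launch command.
aprun -n 64 ./my_mpi_app
```

Submitted with qsub from a login node, the MOM nodes hand the job to ALPS, which launches it on the requested compute nodes.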

Conceptual Architecture

Conceptual Architecture of Hermit.jpg

AMD Opteron 6200 Series Processor (Interlagos)


AMD Turbo Core technology

AMD Turbo Core technology.jpg

Storage Solution for Hermit installation step 1

Hermit1 storage solution.jpg

Pre-Postprocessing Visualization Server

Hermit PrePostprocessingVisualization.jpg

Software Features

  • Cray Linux Environment (CLE) 4 operating system, based on SUSE Linux Enterprise Server (SLES) 11
  • Cray Gemini interconnection network
  • Cluster Compatibility Mode (CCM) functionality enables cluster-based independent software vendor (ISV) applications to run without modification on Cray systems.
  • Batch System: torque, moab
  • many development tools are available
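On CLE systems the development tools are typically selected through environment modules and used via the Cray compiler wrappers; a hedged sketch (module and file names are examples, not taken from this page):

```shell
# Illustrative only: switch the programming environment via modules,
# then build with the Cray compiler wrappers (cc/ftn), which add the
# correct MPI and math libraries for the selected environment.
module swap PrgEnv-cray PrgEnv-gnu   # e.g. change to the GNU compilers
cc  hello.c  -o hello                 # C wrapper
ftn solver.f90 -o solver              # Fortran wrapper
```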

Pictures of installation step 1

Hermit1-Folie1.jpg Hermit1-Folie2.jpg Hermit1-Folie3.jpg Hermit1-Folie4.jpg Hermit1-Folie5.jpg Hermit1-Folie6.jpg Hermit1-Folie7.jpg Hermit1-Folie8.jpg Hermit1-Folie9.jpg