- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

CRAY XE6 Hardware and Architecture

Hardware of Installation step 1

Summary

  • 3552 compute nodes / 113,664 cores (see the arithmetic sketch after this list)
    • dual socket G34
      • 2x AMD Opteron(tm) 6276 (Interlagos) processors with 16 cores @ 2.3 GHz (with TurboCore up to 3.3 GHz)
      • 32 MB combined L2+L3 cache, of which 16 MB is L3 cache
      • HyperTransport HT3, 6.4 GT/s = 102.4 GB/s
      • ~150 GFLOP/s peak per socket
    • 32 GB RAM as standard; 480 nodes equipped with 64 GB memory (126 TB in total)
  • 96 service nodes (network nodes, MOM nodes, router nodes, DVS nodes, boot, database, syslog)
  • high-speed network: CRAY Gemini
  • users' HOME filesystem:
    • ~60 TB (BlueArc Mercury 55)
  • workspace filesystem:
    • Lustre parallel filesystem
    • capacity of 2.7 PB, realized with 16 DDN SFA10K controllers
    • I/O bandwidth ~150 GB/s
  • special user nodes:
    • external login servers
    • pre-/post-processing and visualization nodes
      • 128 GB memory
      • one node comes with 1 TB memory
      • local disks
      • powerful graphics cards with GPGPU post-processing support
      • direct access to the parallel filesystem
  • infrastructure servers
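
For reference, the headline figures in this summary can be recomputed from the per-node values. The short program below is only a sketch: the peak-performance figure assumes 8 Bulldozer modules per Interlagos socket, each retiring 8 double-precision flops per cycle, which yields the ~150 GFLOP/s quoted above.

    /* check_summary.c - recompute the summary figures above.
     * The flops-per-cycle value is an assumption about the
     * Interlagos microarchitecture, not taken from this page. */
    #include <stdio.h>

    int main(void)
    {
        const int nodes = 3552, cores_per_node = 32;  /* dual 16-core sockets */
        const int big_nodes = 480;                    /* nodes with 64 GB RAM */
        const double ghz = 2.3;                       /* base clock           */
        const int modules = 8, flops_per_cycle = 8;   /* assumed per module   */

        printf("cores: %d\n", nodes * cores_per_node);        /* 113664 */
        printf("memory: %d GB\n",
               (nodes - big_nodes) * 32 + big_nodes * 64);    /* 129024 GB, ~126 TB */
        printf("peak per socket: %.1f GFLOP/s\n",
               ghz * modules * flops_per_cycle);              /* 147.2, i.e. ~150 */
        return 0;
    }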

Architecture

  • System Management Workstation (SMW)
    • the system administrator's console for managing the Cray system: monitoring, installing/upgrading software, controlling the hardware, and starting and stopping the XE6 system.
  • service nodes are classified as:
    • login nodes for users to access the system
    • boot nodes, which provide the OS for all other nodes, licenses, ...
    • network nodes, which provide e.g. external network connections for the compute nodes
    • Cray Data Virtualization Service (DVS) nodes: DVS is an I/O forwarding service that can parallelize the I/O transactions of an underlying POSIX-compliant file system.
    • SDB node for services like ALPS, Torque, Moab, Cray management services, ...
    • I/O nodes, e.g. for Lustre
    • MOM (Torque) nodes for placing user jobs of the batch system into execution
  • compute nodes and pre-/post-processing nodes
    • are only available to users through the batch system and the Application Level Placement Scheduler (ALPS), see running applications; a minimal job sketch follows after this list.
      • Compute nodes with 32 GB and with 64 GB memory are available, each with the fast interconnect (CRAY Gemini).
      • The pre- and post-processing/visualization infrastructure aims to support users with
        • complex workflows and advanced access methods
        • remote graphics rendering and simulation steering in order to minimize data movement operations.
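
Because compute nodes cannot be reached interactively, work is placed on them through the batch system and ALPS. As a minimal sketch, the MPI program below could be built with the Cray compiler wrapper and started with aprun from within a batch job; the option values in the comments (64 PEs, 32 per node) are illustrative, not a site recommendation.

    /* hello_mpi.c - minimal MPI program for launch through ALPS.
     *
     * Build with the Cray compiler wrapper, which links MPI itself:
     *     cc hello_mpi.c -o hello_mpi
     * Start inside a batch job via ALPS, e.g. 64 PEs, 32 per node:
     *     aprun -n 64 -N 32 ./hello_mpi
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this PE's id        */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of PEs */
        printf("PE %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }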


Conceptual Architecture

Conceptual Architecture of Hermit.jpg

AMD Opteron 6200 Series Processor (Interlagos)

Interlagos.jpg

AMD Turbo Core technology

AMD Turbo Core technology.jpg

Storage Solution for Hermit installation step 1

Hermit1 storage solution.jpg


Pre-Postprocessing Visualization Server

Hermit PrePostprocessingVisualization.jpg

Software Features

  • Cray Linux Environment (CLE) 4.? operating system
  • the operating system is based on SUSE Linux Enterprise Server (SLES) 11
  • Cray Gemini interconnection network
  • Cluster Compatibility Mode (CCM) functionality enables cluster-based independent software vendor (ISV) applications to run without modification on Cray systems.
  • Batch system: Torque, Moab
  • many development tools available; see the sketch after this list
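
As a hedged illustration of the development workflow on CLE, the commands in the comments below show the usual module/compiler-wrapper pattern (the PrgEnv module names are assumed, not confirmed by this page); the OpenMP program itself simply fills one 32-core node with threads.

    /* hello_omp.c - OpenMP sketch for one dual-socket Interlagos node.
     *
     * Assumed workflow with the Cray programming environment:
     *     module swap PrgEnv-cray PrgEnv-gnu    # choose a compiler suite
     *     cc -fopenmp hello_omp.c -o hello_omp  # cc is the Cray wrapper
     *     OMP_NUM_THREADS=32 aprun -n 1 -d 32 ./hello_omp
     */
    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        #pragma omp parallel
        {
            /* with OMP_NUM_THREADS=32 this matches the 2 x 16 cores per node */
            printf("thread %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads());
        }
        return 0;
    }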


Pictures of installation step 1

Hermit1-Folie1.jpg Hermit1-Folie2.jpg Hermit1-Folie3.jpg Hermit1-Folie4.jpg Hermit1-Folie5.jpg Hermit1-Folie6.jpg Hermit1-Folie7.jpg Hermit1-Folie8.jpg Hermit1-Folie9.jpg