- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -
CRAY XC40 Hardware and Architecture: Difference between revisions
Revision as of 09:10, 4 June 2013
== Installation step 2a (hornet) ==

=== Summary Phase 1 Step 2a ===
| Cray Cascade Supercomputer | Step 2a |
|---|---|
| Cray Cascade Cabinets | 1 |
| Number of Compute Nodes | 164 |
| Number of Compute Processors | 328 Intel SandyBridge 2.6 GHz, 8 cores |
| Compute Memory on Scalar Processors | DDR3 1600 MHz |
| I/O Nodes | 14 |
| Interconnect | Cray Aries |
| External Login Servers | 2 |
| Pre- and Post-Processing Servers | - |
| User Storage | 330 TB |
| Cray Linux Environment (CLE) | Yes |
| PGI Compiling Suite (FORTRAN, C, C++) including Accelerator | 25 users (shared with Step 1) |
| Cray Developer Toolkit | Unlimited users |
| Cray Programming Environment | Unlimited users |
| Allinea DDT Debugger | 2048 processes (shared with Step 1) |
| Lustre Parallel Filesystem | Licensed on all sockets |
| Intel Composer XE | 10 seats |
== Architecture ==
* System Management Workstation (SMW)
** the system administrator's console for managing the Cray system: monitoring, installing/upgrading software, controlling the hardware, and starting and stopping the system.
* Service nodes are classified as:
** login nodes, through which users [[CRAY_XC30_access| access]] the system
** boot nodes, which provide the OS for all other nodes, licenses, ...
** network nodes, which provide e.g. external network connections for the compute nodes
** Cray Data Virtualization Service (DVS) nodes: an I/O forwarding service that can parallelize the I/O transactions of an underlying POSIX-compliant file system
** the sdb node for services such as ALPS, Torque, Moab, Slurm, Cray management services, ...
** I/O nodes, e.g. for Lustre
** MOM nodes, which place user jobs from the batch system into execution
* Compute nodes
** are only available to users through the [[CRAY_XC30_Using_the_Batch_System_slurm | batch system]] and the Application Level Placement Scheduler (ALPS); see [http://docs.cray.com/cgi-bin/craydoc.cgi?mode=View;id=S-2496-4001;right=/books/S-2496-4001/html-S-2496-4001//cnl_apps.html running applications].
** Each compute node is equipped with 64 GB of memory and the fast Cray Aries interconnect.
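Since compute nodes are reachable only through the batch system and ALPS, a user job is typically submitted as a batch script that launches the application with <code>aprun</code>, the ALPS application launcher. The following is a minimal sketch only; the job name, node counts, walltime, and the binary <code>./my_app</code> are illustrative assumptions, not HLRS defaults, so consult the batch-system page linked above for the site-specific options.

```shell
#!/bin/bash
#SBATCH --job-name=example      # illustrative job name
#SBATCH --nodes=2               # request 2 compute nodes
#SBATCH --time=00:10:00         # 10 minutes walltime

# Launch 32 MPI ranks, 16 per node (2 x 8-core SandyBridge sockets),
# via ALPS. aprun is the ALPS launcher on Cray systems:
#   -n  total number of processing elements (ranks)
#   -N  processing elements per node
aprun -n 32 -N 16 ./my_app
```

The script is handed to the scheduler with <code>sbatch jobscript.sh</code>; only <code>aprun</code> actually places processes on the compute nodes, while the script itself runs on a MOM node.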