- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

CRAY XC40 Getting Started


Welcome to HWW System CRAY XC30 named HORNET


This guide describes the software environment and tools used to develop, debug, and run applications on Cray XT, Cray XE, Cray XK, and Cray XC30 systems. It is intended as a general overview and introduction to the Cray system for new users and application programmers.

This guide is intended to be used in conjunction with Workload Management and Application Placement for the Cray Linux Environment (S–2496), which describes the Application Level Placement Scheduler (ALPS) and the aprun command in considerably greater detail.

The information contained in this guide is of necessity fairly high-level and generalized, as the Cray platform supports a wide variety of hardware nodes as well as many different compilers, debuggers, and other software tools. Therefore, system hardware and software configurations vary considerably from site to site. For specific information about your site and its installed hardware, software, and usage policies, contact your site administrator.
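
As a first orientation, the following sketch illustrates how a program is typically built using the module environment and the Cray compiler wrappers. The wrapper and module names (cc, ftn, PrgEnv-*) are the standard Cray Programming Environment ones; the file names are placeholders, and the set of modules actually installed depends on the site.

 # list the currently loaded modules
 module list

 # optionally switch the programming environment, e.g. from Cray to GNU
 module swap PrgEnv-cray PrgEnv-gnu

 # the compiler wrappers cc, CC and ftn link MPI and the other Cray
 # libraries automatically for the selected programming environment
 cc  -o app.exe app.c     # C source
 ftn -o app.exe app.f90   # Fortran source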
Notice!

The frontend nodes are accessible as hermit1.hww.de and are intended as the single point of access to the entire cluster. Here you can set up your environment, move your data, edit and compile your programs, and create batch scripts. Interactive use that generates a high load, such as running your programs, is NOT allowed on the frontend/login nodes.
The compute nodes for running parallel jobs are only available through the batch system (see the example job script below)!
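
The following is a minimal sketch of a job script, assuming a PBS-style batch system and the aprun launcher described in S–2496. The job name, node count, cores per node, and walltime are illustrative assumptions; check the batch system documentation for this site for the resource directives that are actually valid here.

 #!/bin/bash
 #PBS -N example_job            # job name (placeholder)
 #PBS -l nodes=2:ppn=24         # assumed request: 2 nodes with 24 cores each
 #PBS -l walltime=00:20:00      # maximum wall clock time

 cd $PBS_O_WORKDIR              # change to the directory the job was submitted from

 # start 48 MPI ranks, 24 per node, on the compute nodes via ALPS
 aprun -n 48 -N 24 ./app.exe

Such a script would be submitted from a frontend node with qsub (e.g. qsub job.pbs); aprun then places the application on the compute nodes allocated to the job, which cannot be reached by a direct login.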