- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -
HPE Hawk
Module environment
cf. here
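As a brief sketch of working with the module environment (the standard Environment Modules commands; the module name `mpt` is taken from the MPI section of this page):

```shell
# List the modules available on the system
module avail

# Load a module, e.g. the HPE MPT MPI module
module load mpt

# Show which modules are currently loaded
module list

# Unload a module again
module unload mpt
```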
Compiler
cf. here
MPI
Tuned MPI: To use the MPI implementation provided by HPE, load the Message Passing Toolkit (MPT) module mpt (not ABI-compatible with other MPI implementations) or hmpt (ABI-compatible with MPICH derivatives).
User Guide: For detailed information, cf. the HPE Message Passing Interface (MPI) User Guide.
Performance optimization: Regarding MPI performance optimization by means of tuning environment variables, cf. Tuning of MPT.
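A minimal sketch of compiling and launching an MPI program with MPT (the compiler wrapper and launcher names are standard MPT tools; the source file name is hypothetical):

```shell
# Load the HPE Message Passing Toolkit.
# Use "module load hmpt" instead if MPICH ABI compatibility is required.
module load mpt

# Compile with the MPI compiler wrapper (hypothetical source file)
mpicc -o hello_mpi hello_mpi.c

# Launch with mpirun; -np sets the number of MPI ranks
mpirun -np 4 ./hello_mpi
```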
Libraries
cf. here
Batch System
cf. here
Disk storage
Home directories as well as workspaces are handled in the same way as on Hazel Hen; please cf. the Storage Description for details.
Pre- and post processing
Within the HLRS simulation environment, special nodes for pre- and post-processing tasks are available. Such nodes can be requested via the batch system (follow this link for more info). Available nodes are:

4 nodes | 2 TB memory | 2-socket AMD | x TB local storage | shared usage model
1 node  | 4 TB memory | 2-socket AMD | x TB local storage | shared usage model
More specialized nodes (e.g. graphics, vector, DataAnalytics) are available in the Vulcan cluster.
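A hedged sketch of requesting one of these shared large-memory nodes through the batch system (the select statement and node type name below are illustrative assumptions; cf. the batch system documentation linked above for the exact resource selectors):

```shell
# Request an interactive session on one pre-/post-processing node
# for one hour. "node_type=smp" is a placeholder for the actual
# selector used on Hawk.
qsub -I -l select=1:node_type=smp -l walltime=01:00:00
```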
Manuals
Processor:
- Software Optimization Guide for AMD EPYC Rome Processors
- Open-Source Register Reference for AMD EPYC Rome Processors (in particular describing the available hardware performance counters)
- Software Optimization Guide for AMD Family 15h (although depicting an older family of AMD processors, the optimization approaches shown in this document are also applicable to the AMD EPYC Rome processors deployed in Hawk)
MPI:
- HPE Message Passing Interface (MPI) User Guide
Batch system:
- User's Guide
- Reference Guide (cf. this for a complete list of environment variables exported by PBS)