- Infos im HLRS Wiki sind nicht rechtsverbindlich und ohne Gewähr -
- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

HPE Hawk: Difference between revisions

From HLRS Platforms
* [[CAE_utilities|CAE Utilities]]
* [[MKL | MKL Fortran Interfaces ]]
* [[FFTW | FFTW library usage ]]
|}
</div>

Revision as of 14:28, 16 February 2020

Warning: Hawk is currently in the set-up phase. For details about the timing, please see the Hawk installation schedule.
Warning: Please keep in mind that the system is still under construction; modifications may occur and the observed performance can vary.




Introduction


Troubleshooting




Documentation


Utilities



Help for Wiki Usage



Module environment

cf. here
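The module environment follows the usual Environment Modules / Lmod conventions. A minimal sketch of typical commands, assuming the standard `module` tool (the module names shown are examples and may differ on Hawk):

```shell
module avail            # list all available modules
module load mpt         # load a module, e.g. the HPE MPT MPI stack
module list             # show currently loaded modules
module unload mpt       # remove a module from the environment
```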


Compiler

cf. here
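A hedged sketch of a typical compile step under the module environment; the compiler module names and the application file `my_app.c` are placeholders, not the definitive Hawk setup:

```shell
# Load a compiler and the MPI stack (module names are assumptions):
module load gcc mpt
# The MPI compiler wrapper handles MPI include and library paths:
mpicc -O2 -o my_app my_app.c
```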


MPI

Tuned MPI: In order to use the MPI implementation provided by HPE, please load the Message Passing Toolkit (MPT) module mpt (not ABI-compatible with other MPI implementations) or hmpt (ABI-compatible with MPICH derivatives).

User Guide: For detailed information cf. the HPE Message Passing Interface (MPI) User Guide.

Performance optimization: For MPI performance optimization by means of tuning environment variables, please cf. Tuning of MPT.
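The choice between the two MPT modules described above can be sketched as follows; the launch line is an assumption (process counts and the launcher invocation depend on your batch job), not a definitive recipe:

```shell
# HPE-native module (not ABI-compatible with other MPIs):
module load mpt
# Alternative, ABI-compatible with MPICH derivatives:
# module load hmpt

# Launch inside a batch job; 128 ranks is a placeholder value:
mpirun -np 128 ./my_app
```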


Libraries

cf. here



Batch System

cf. here
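A hypothetical job-script sketch for the batch system, assuming PBS-style directives; the queue, `select` resource string, and `node_type` value are placeholders and must be taken from the linked batch system page:

```shell
#!/bin/bash
#PBS -N my_job
#PBS -l select=2:mpiprocs=128     # resource string is an assumption
#PBS -l walltime=00:20:00

cd "$PBS_O_WORKDIR"               # start in the submission directory
module load mpt
mpirun -np 256 ./my_app           # ranks = nodes x mpiprocs (example)
```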



Disk storage

Home directories as well as workspaces are handled in the same way as on Hazel Hen, so please cf. the Storage Description for details.
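Since workspaces are handled as on Hazel Hen, the HLRS workspace tools apply. A minimal sketch, assuming the usual `ws_*` commands (the workspace name and duration are placeholders; maximum durations may differ on Hawk):

```shell
ws_allocate my_run 30     # allocate workspace "my_run" for 30 days
ws_list                   # list existing workspaces and their paths
ws_release my_run         # release the workspace when it is no longer needed
```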


Pre- and post processing

Within the HLRS simulation environment, special nodes for pre- and post processing tasks are available. These nodes can be requested via the batch system (follow this link for more info). Available nodes are:

  Count     Memory   CPU             Local storage   Usage model
  4 nodes   2 TB     2-socket AMD    x TB            shared
  1 node    4 TB     2-socket AMD    x TB            shared


More specialized nodes (e.g. graphics, vector, data analytics) are available in the Vulcan cluster.
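Requesting one of the shared large-memory pre/post processing nodes listed above might look as follows; this is a sketch assuming PBS-style interactive submission, and the `node_type` and memory values are hypothetical:

```shell
# Interactive session on a shared 2 TB pre/postprocessing node
# (resource names are assumptions; see the batch system page):
qsub -I -l select=1:node_type=smp:mem=2tb -l walltime=01:00:00
```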


Manuals

Processor:


MPI:


Batch system: