- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

HPE Hawk

From HLRS Platforms
[[Help | Help for Wiki Usage]]


== Module environment ==
cf. [[Module environment(Hawk)|here]]
<br>
== Compiler ==
cf. [[Compiler(Hawk)|here]]
<br>
== MPI ==
'''Tuned MPI''': To use the MPI implementation provided by HPE, load the Message Passing Toolkit (MPT) module ''mpt'' (not ABI-compatible with other MPI implementations) or ''hmpt'' (ABI-compatible with MPICH derivatives).
'''User Guide''': For detailed information, see the [http://www.hpe.com/support/mpi-ug-036 HPE Message Passing Interface (MPI) User Guide].
'''Performance optimization''': For MPI performance optimization by means of tuning environment variables, see [https://kb.hlrs.de/platforms/upload/Tuning_of_MPT.pdf Tuning of MPT].
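As a quick orientation, loading and using MPT might look as follows. The module names ''mpt'' and ''hmpt'' are taken from this page; the compiler wrapper and launcher invocations (<code>mpicc</code>, <code>mpirun</code>) are assumptions based on typical HPE MPT installations and may differ on Hawk — consult the HPE MPI User Guide linked above.

```shell
# Load the HPE Message Passing Toolkit (module names per this page).
module load mpt        # or: module load hmpt  (ABI-compatible with MPICH derivatives)

# Compile an MPI program (wrapper name assumed; MPT installations typically
# provide mpicc/mpif90 wrappers).
mpicc -O2 hello_mpi.c -o hello_mpi

# Launch with 4 ranks (launcher invocation assumed; see the HPE MPI User Guide).
mpirun -np 4 ./hello_mpi
```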
<br>
== Libraries ==
cf. [[Libraries(Hawk)|here]]
<br>
== Batch System ==
cf. [[Batch_System_PBSPro_(Hawk)|here]]
<br>





Revision as of 14:32, 16 February 2020

Warning: Hawk is currently in the set-up phase. For details about the timing, please see the Hawk installation schedule.
Warning: Please keep in mind that the system is currently under construction; modifications may occur and the observed performance can vary.





Disk storage

Home directories as well as workspaces are handled in the same way as on Hazel Hen; please see the Storage Description for details.
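Since workspaces are handled as on Hazel Hen, allocating one might look as follows. The workspace tools (<code>ws_allocate</code>, <code>ws_list</code>) and their arguments reflect the usual HLRS workspace mechanism, but are assumptions here — check the Storage Description for the commands actually available on Hawk.

```shell
# Allocate a workspace named "mywork" with a lifetime of 30 days
# (tool name and arguments assumed; see the Storage Description).
ws_allocate mywork 30

# List existing workspaces and their expiry dates.
ws_list
```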


Pre- and post-processing

Within the HLRS simulation environment, special nodes for pre- and post-processing tasks are available. These nodes can be requested via the batch system (follow this link for more information). The available nodes are:

  4 nodes: 2 TB memory, 2-socket AMD, x TB local storage, shared usage model
  1 node:  4 TB memory, 2-socket AMD, x TB local storage, shared usage model
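A request for one of these nodes through the batch system might look like the following PBS Pro sketch. The directive syntax is standard PBS Pro, but the job name, resource selectors, and values used here are assumptions for illustration — consult the linked batch system page for the actual resource names on Hawk.

```shell
#!/bin/bash
# Hypothetical PBS Pro job script for a shared pre-/post-processing node.
# Resource selectors and values are assumptions; see the
# Batch_System_PBSPro_(Hawk) page for the syntax actually used on Hawk.
#PBS -N postproc
#PBS -l select=1:mem=2000gb:ncpus=16
#PBS -l walltime=02:00:00

cd "$PBS_O_WORKDIR"
./my_postprocessing_tool input.dat
```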


More specialized nodes (e.g. graphics, vector, data analytics) are available in the Vulcan cluster.


Manuals

Processor:


MPI:


Batch system: