For the Hawk installation schedule please see [https://kb.hlrs.de/platforms/index.php/Hawk_installation_schedule Hawk installation schedule].

{{Note
| text = Please be sure to read at least the [[10_minutes_before_the_first_job]] document and consult the [[General HWW Documentation]] before you start to work with any of our systems.
}}

<font color=red>'''If your job does not start, please keep in mind the time-dependent limitations according to the [[Batch_System_PBSPro_(Hawk)#time-dependent limitations|Batch System]]!'''</font>

{{Warning
| text = In preparation of the next generation supercomputer [[Hunter_(HPE)|Hunter]], the hardware configuration has been reduced from 5632 to 4096 compute nodes. Workspace filesystem ws10 has been removed.
}}

This page is under construction!

The information below applies to the Test and Development System (TDS), which is similar to the future Hawk production system. Please keep in mind that this is a system under construction. Hence, modifications might occur ''without'' announcement, and things may not work as expected from time to time!
== Hardware ==

=== Node/Processor ===

Compute nodes as well as login nodes are equipped with the AMD EPYC 7702 64-core processor. Detailed node and processor information will be provided later; for additional information, please see [https://www.amd.com/de/products/cpu/amd-epyc-7702 AMD Rome 7702].
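As a quick check, the processor layout of a node can be inspected with standard Linux tools once logged in (nothing Hawk-specific is assumed here):

```shell
# Inspect processor model, core count and NUMA layout
# using standard Linux tools (not Hawk-specific):
lscpu | grep -E 'Model name|Socket|Core|NUMA'
# numactl shows the NUMA node/memory layout if it is installed:
command -v numactl >/dev/null && numactl --hardware || true
```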

<br>

=== Interconnect ===

Hawk deploys an InfiniBand HDR based interconnect with a 9-dimensional enhanced hypercube topology; please refer to [https://kb.hlrs.de/platforms/upload/Interconnect_topology.pdf this document] regarding the latter. InfiniBand HDR has a bandwidth of 200 Gbit/s and an MPI latency of ~1.3 µs per link. The full bandwidth of 200 Gbit/s can be used when communicating between the 16 nodes connected to the same node of the hypercube (cf. [https://kb.hlrs.de/platforms/upload/Interconnect_topology.pdf here]). Within the hypercube, the higher the dimension, the less bandwidth is available.

Topology-aware scheduling is used to avoid major performance fluctuations. This means that larger jobs can only be requested with defined node counts (64, 128, 256, 512, 1024, 2048 and 4096) in regular operation. This restriction ensures optimal system utilization while simultaneously exploiting the network topology. Jobs with fewer than 128 nodes are processed in a special partition; jobs with more than 4096 nodes are processed at special times.
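With topology-aware scheduling, the allowed node counts translate directly into the PBS select statement. A hypothetical job header for a 128-node job might look as follows; note that ''ncpus''/''mpiprocs'' are generic PBSPro resource names used only for illustration, not verified site-specific selectors:

```shell
#!/bin/bash
# Hypothetical PBSPro job header for a 128-node job; the node count
# must match one of the allowed sizes (64, 128, ..., 4096).
# Resource names are generic PBSPro syntax, for illustration only.
#PBS -N hypercube_job
#PBS -l select=128:ncpus=128:mpiprocs=128
#PBS -l walltime=01:00:00

cd "$PBS_O_WORKDIR"
mpirun -np 16384 ./my_app   # 128 nodes x 128 ranks per node
```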

<br>
=== Filesystem ===

<br>

== Access ==

'''Login-Node''': hawk-tds-login1.hww.hlrs.de

{{note|text=Access to the Hawk TDS is now possible on request. In case you have early access, we ask you to provide us with your experience regarding usage and performance (approximately half a page) once a month.}}
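Access is via SSH; a typical login (with a placeholder username) looks like:

```shell
# Replace 'username' with your HWW account name:
ssh username@hawk-tds-login1.hww.hlrs.de
```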

<br>

== Module environment ==

cf. [[Module environment(Hawk)|here]]

<br>

== Pre- and post-processing ==
Within the HLRS simulation environment, special nodes for pre- and post-processing tasks are available. Such nodes can be requested via the batch system (see [[Hawk_PrePostProcessing|Pre- and Post-Processing]] for more info).
| | |} |
| Available nodes are
| | </div> |
| table...
| |
| 4 nodes 2 TB Memory 2 Socket AMD ...x TB local storage shared usage model
| |
| 1 Node 4 TB Memory 2 Socket AMD x TB local storage shared usage model
| |
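Requesting one of these shared large-memory nodes could look like the following interactive submission; the memory value and resource names are illustrative assumptions, not confirmed settings:

```shell
# Hypothetical interactive request for a large-memory
# pre-/post-processing node (resource values are illustrative only):
qsub -I -l select=1:mem=2000gb -l walltime=02:00:00
```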
More specialized nodes, e.g. graphics, vector, DataAnalytics, ..., are available in the [[NEC_Cluster_Hardware_and_Architecture_(vulcan)|Vulcan cluster]].

== Compiler ==

cf. [[Compiler(Hawk)|here]]

<br>

== MPI ==

'''Tuned MPI''': In order to use the MPI implementation provided by HPE, please load the Message Passing Toolkit (MPT) module ''mpt'' (not ABI-compatible with other MPI implementations) or ''hmpt'' (ABI-compatible with MPICH derivatives).

'''User Guide''': For detailed information cf. the [http://www.hpe.com/support/mpi-ug-036 HPE Message Passing Interface (MPI) User Guide].

'''Performance optimization''': With respect to MPI performance optimization by means of tuning environment variables, please cf. [https://kb.hlrs.de/platforms/upload/Tuning_of_MPT.pdf Tuning of MPT].
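A minimal build-and-run sketch with MPT might look like this; the ''mpicc'' wrapper name is an assumption and may differ on the actual system:

```shell
# Sketch: building and running an MPI program with HPE MPT.
# 'mpicc' as wrapper name is an assumption; 'module load hmpt'
# would select the MPICH-ABI-compatible variant instead.
module load mpt
mpicc -o hello hello.c
mpirun -np 4 ./hello
```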

<br>

== Libraries ==

cf. [[Libraries(Hawk)|here]]

<br>

== Batch System ==

cf. [[Batch_System_PBSPro_(Hawk)|here]]

<br>

== Disk storage ==

Home directories as well as workspaces are handled in the same way as on Hazel Hen, so please cf. [[CRAY_XC40_Disk_Storage|Storage Description]] regarding details.
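On Hazel Hen, workspaces are managed with the HLRS workspace tools; assuming the same mechanism applies here, typical usage is:

```shell
# HLRS workspace tools as known from Hazel Hen (assumed to apply
# here as well, per the text above):
ws_allocate myproject 30   # allocate workspace 'myproject' for 30 days
ws_list -a                 # list your workspaces
ws_release myproject       # release it when no longer needed
```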

<br>

== Manuals ==

'''Processor''':

* [https://developer.amd.com/wp-content/resources/56305_SOG_3.00_PUB.pdf Software Optimization Guide for AMD EPYC Rome Processors]
* [https://developer.amd.com/wp-content/resources/56255_3_03.PDF Open-Source Register Reference for AMD EPYC Rome Processors] <br> (in particular describing available hardware performance counters)
* [https://www.amd.com/system/files/TechDocs/47414_15h_sw_opt_guide.pdf Software Optimization Guide for AMD Family 15h] <br> (although depicting an older family of AMD processors, the optimization approaches shown in this document are also applicable to the AMD EPYC Rome processors deployed in Hawk)

<br>

'''MPI''':

* [http://www.hpe.com/support/mpi-ug-036 HPE Message Passing Interface (MPI) User Guide]

<br>

'''Batch system''':

* [https://www.altair.com/pdfs/pbsworks/PBSUserGuide19.2.3.pdf User's Guide]
* [https://www.altair.com/pdfs/pbsworks/PBSReferenceGuide19.2.3.pdf Reference Guide] (cf. this with respect to a complete list of environment variables exported by PBS)