- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -
HPE Hunter Hardware and Architecture
For further technical details of the Hunter hardware and architecture, please refer to these slides.
Summary
Node/Processor
hunter APU compute nodes
- Blade: HPE Cray EX255a (El Capitan blade architecture, MI300A)
- APU: AMD Instinct MI300A Accelerator
- 4 APUs per node
- 24 CPU cores and 228 GPU compute units per APU
- Memory: 512 GB per node (HBM3)
- HBM3 bandwidth: ~5.3 TB/s per APU
- Network: HPE Slingshot 11 (4 injection ports per node, 4x200 Gbps)
- Number of nodes: 188
- Some nodes have one local NVMe M.2 SSD installed.
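The APU layout listed above can be verified at run time through the HIP runtime API. The following is a minimal sketch, assuming the ROCm/HIP toolchain is available on the node (compiler wrappers and module names are site-specific and not covered on this page); it enumerates the APUs visible to a process and reports their HBM capacity.

// Minimal sketch: list the MI300A APUs visible on a compute node via the HIP runtime.
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    if (hipGetDeviceCount(&count) != hipSuccess) {
        std::fprintf(stderr, "No HIP devices visible\n");
        return 1;
    }
    std::printf("Visible APUs: %d (a full Hunter APU node exposes 4)\n", count);

    for (int i = 0; i < count; ++i) {
        hipDeviceProp_t prop;
        if (hipGetDeviceProperties(&prop, i) != hipSuccess) {
            continue;
        }
        // totalGlobalMem reports the HBM3 capacity visible to this device.
        std::printf("APU %d: %s, %.1f GiB HBM\n", i, prop.name,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}

For this sketch a plain build with hipcc (e.g. hipcc apu_query.cpp -o apu_query) is typically sufficient; the actual compiler environment on Hunter is described elsewhere.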
hunter CPU compute nodes
Pre- and post processing
Within the HLRS simulation environment, special nodes for pre- and post-processing tasks are available. These nodes can be requested via the batch system using the queue "pre" or the queue "smp". The available nodes are:
- 4 nodes: 3 TB memory, 2-socket AMD EPYC 9354 32-core processor, exclusive usage model, available via queue "pre"
- 1 node: 6 TB memory, 2-socket AMD EPYC 9354 32-core processor, shared usage model, available via queue "smp"
More specialized nodes, e.g. graphics, vector, and data analytics nodes, are available in the Vulcan cluster. If you need such specialized nodes on the Vulcan cluster for pre- or post-processing within a project located on Hunter resources, please ask your project manager for access to Vulcan.
Interconnect
HPE Slingshot 11 with a Dragonfly topology
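As an illustration only, the node injection bandwidth can be sanity-checked with a simple MPI ping-pong between two ranks placed on different nodes. The sketch below assumes an MPI library is available (for example Cray MPICH on HPE systems); launcher options and achievable rates depend on the installation and are not specified here.

// Illustrative MPI ping-pong between rank 0 and rank 1; place the two ranks
// on different nodes so the traffic actually crosses the Slingshot network.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) std::fprintf(stderr, "Run with at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    const int bytes = 64 * 1024 * 1024;  // 64 MiB message
    const int iters = 50;
    std::vector<char> buf(bytes);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; ++i) {
        if (rank == 0) {
            MPI_Send(buf.data(), bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf.data(), bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf.data(), bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf.data(), bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0) {
        // Two messages of 'bytes' cross the network per iteration.
        double gbs = 2.0 * bytes * iters / (t1 - t0) / 1e9;
        std::printf("Ping-pong bandwidth: %.1f GB/s\n", gbs);
    }
    MPI_Finalize();
    return 0;
}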
Filesystem
Available Lustre filesystems on Hunter:
- ws12:
  - available storage capacity: ~12 PB
  - Lustre devices: 2 MDT, 20 OST
  - performance:
In addition, a central HOME and project fileserver is mounted on Hunter. Some special nodes have a local disk installed which can be used as localscratch.
See also Storage (Hunter)