- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

Big Data, AI Applications and Frameworks

This guide provides a technical description of the hardware and software environments for high-performance data analytics (HPDA) and AI applications.

Hardware and Storage Overview

AI and HPDA workflows often require local storage. However, most HPC nodes have no local drive; local storage is available only on the nodes listed below. Alternatively, you can use the RAM disk mounted at /run/user/${UID}. For more information on the HOME and SCRATCH directories, please refer to the dedicated documentation for Hawk and Vulcan.

Warning: Ensure your application uses the correct paths for local files. /tmp is a minimal in-memory filesystem unless a local SSD is mounted there.
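
For illustration, a minimal Python sketch of the path-selection logic described above; the helper name pick_local_dir is hypothetical, and the paths are the ones named in this section:

  import os
  from pathlib import Path

  def pick_local_dir() -> Path:
      """Return a node-local directory for temporary files."""
      ssd = Path("/localscratch")
      if ssd.is_dir():
          # Local SSD, only present on the nodes listed below.
          return ssd
      # Per-user RAM disk; note that its contents consume node memory.
      return Path(f"/run/user/{os.getuid()}")

  local_dir = pick_local_dir()
  print(f"Staging temporary files in {local_dir}")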

Hawk

Hawk is primarily a CPU-based supercomputer, but its GPU partition is well suited to HPDA and AI applications.

The rome-ai partition contains 24 nodes with 192 GPUs in total. Resources per node (a sketch for inspecting the GPUs visible to a job follows this list):

  • CPU: 2x AMD EPYC 7742
  • GPU: 8x NVIDIA A100-SXM4
    • 20 nodes with the 40 GB version
    • 4 nodes with the 80 GB version
  • RAM: 1 TB
  • 15 TB local storage mounted to /localscratch
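
As referenced above, a sketch for inspecting the GPUs visible to a job, assuming a CUDA-enabled PyTorch installation (see the Frameworks section below):

  import torch  # assumes a CUDA-enabled PyTorch installation

  # On a full rome-ai node this should list eight A100s with
  # roughly 40 GiB or 80 GiB of memory each.
  for i in range(torch.cuda.device_count()):
      props = torch.cuda.get_device_properties(i)
      print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GiB")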

The rome partition contains 5,632 nodes with 720,896 compute cores in total. Resources per node:

  • CPU: 2x AMD EPYC 7742
  • RAM: 256 GB

Vulcan

Vulcan has two dedicated partitions to accelerate AI and HPDA workloads.

The clx-ai partition contains 4 nodes with 32 GPUs in total. Resources per node:

  • CPU: 2x Intel Xeon Gold 6240
  • GPU: 8x NVIDIA V100-SXM2 32 GB
  • RAM: 768 GB
  • 7.3 TB local storage mounted to /localscratch

clx-21 is an 8-node CPU-based partition with local storage. Resources per node:

  • CPU: 2x Intel Xeon Gold 6230
  • RAM: 384 GB
  • 1.9 TB local storage mounted to /localscratch

Software

The only way to access the compute nodes is by using the batch system from the login nodes. For more information, please refer to the dedicated documentation for Hawk and Vulcan.

Warning: For security reasons, the compute nodes have no internet connectivity.

Conda

Only the main and r channels are available through the Conda module. If you require custom Conda packages, our guide explains how to transfer local Conda environments to the clusters. Additionally, the documentation demonstrates how to create Conda environments with the default Conda module.
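
As an illustration of one common transfer approach (not necessarily the exact procedure from the linked guide), the conda-pack tool can archive a local environment into a relocatable tarball; the environment name below is a placeholder:

  import conda_pack  # install with `conda install conda-pack` on your local machine

  # Archive the local environment. Copy the tarball to the cluster,
  # extract it on a shared filesystem, then activate it with
  # `source <env-dir>/bin/activate` followed by `conda-unpack`.
  conda_pack.pack(name="my-ai-env", output="my-ai-env.tar.gz")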

Containers

For security reasons, only udocker is available, since it can execute containers without sudo permissions or user namespace support. Our documentation contains a guide to running AI containers on the GPU-accelerated partitions.
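
A rough sketch of a typical udocker workflow, wrapped in Python for consistency with the other examples; the image and container names are placeholders, and the pull step needs internet access, so it cannot run on the compute nodes:

  import subprocess

  def udocker(*args: str) -> None:
      """Invoke a udocker subcommand; no sudo is required."""
      subprocess.run(["udocker", *args], check=True)

  # Fetch the image and create a container where internet is available.
  udocker("pull", "python:3.10")
  udocker("create", "--name=my-container", "python:3.10")

  # Later, inside a batch job: run a command in the container.
  udocker("run", "my-container", "python", "--version")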

Frameworks

You can install PyTorch and TensorFlow in a custom Conda environment or container. Template project repositories for widely recognized data processing and machine learning frameworks are available at https://code.hlrs.de under the SiVeGCS organization, illustrating their usage on the HLRS systems.
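
As a quick sanity check before a long run, you can confirm that a framework detects the node's GPUs; a minimal sketch assuming TensorFlow is installed in your environment or container:

  import tensorflow as tf  # installed in a Conda environment or container

  # Report how many GPUs the framework can see on this node.
  gpus = tf.config.list_physical_devices("GPU")
  print(f"TensorFlow {tf.__version__} sees {len(gpus)} GPU(s)")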