- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

Big Data, AI Applications and Frameworks


This guide provides a technical description of the hardware and software environments for high-performance data analytics (HPDA) and AI applications.

Hardware and Storage Overview

AI and HPDA workflows often require local storage. However, most HPC nodes do not have a local drive; local storage is available only on the nodes listed below. Alternatively, you can use the RAM disk mounted at /run/user/${UID}. For more information on the HOME and SCRATCH directories, please refer to the dedicated documentation for Hawk and Vulcan.

Warning: Ensure your application writes local files to the correct paths. Unless it is mounted on a local SSD, /tmp is only a minimal in-memory filesystem.
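
The following Python sketch illustrates one way to select a node-local directory for temporary files, preferring /localscratch where it exists and falling back to the RAM disk. The fallback order and the TMPDIR handling are only an example, not an official recommendation; adapt them to your workflow.

    # Minimal sketch: pick a node-local directory for temporary files.
    # /localscratch and /run/user/${UID} are the locations described above;
    # the fallback order and use of TMPDIR are assumptions to adapt as needed.
    import os
    from pathlib import Path

    def local_scratch_dir() -> Path:
        """Return a writable node-local directory, preferring the local SSD."""
        candidates = [
            Path("/localscratch"),             # local SSD on rome-ai, clx-ai, clx-21 nodes
            Path(f"/run/user/{os.getuid()}"),  # RAM disk available on all nodes
        ]
        for candidate in candidates:
            if candidate.is_dir() and os.access(candidate, os.W_OK):
                return candidate
        raise RuntimeError("No node-local storage found; use a workspace directory instead")

    if __name__ == "__main__":
        scratch = local_scratch_dir()
        os.environ["TMPDIR"] = str(scratch)  # many libraries honour TMPDIR for temp files
        print("Using local scratch:", scratch)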

Hawk

Hawk is primarily a CPU-based supercomputer, but its GPU partition is well suited to HPDA and AI applications.

The rome-ai partition contains 24 nodes and 192 GPUs in total. Resources per node:

  • CPU: 2x AMD EPYC 7742
  • GPU: 8x NVIDIA A100-SXM4
    • 20 nodes with the 40 GB version
    • 4 nodes with the 80 GB version
  • RAM: 1 TB
  • 15 TB local storage mounted to /localscratch

The rome partition contains 5,632 nodes and 720,896 compute cores in total. Resources per node:

  • CPU: 2x AMD EPYC 7742
  • RAM: 256 GB

Vulcan

Vulcan has two dedicated partitions to accelerate AI and HPDA workloads.

The clx-ai partition contains 4 nodes and 32 GPUs in total. Resources per node:

  • CPU: 2x Intel Xeon Gold 6240
  • GPU: 8x NVIDIA V100-SXM2 32 GB
  • RAM: 768 GB
  • 7.3 TB local storage mounted to /localscratch

clx-21 is an 8-node CPU-based partition with local storage. Resources per node:

  • CPU: 2x Intel Xeon Gold 6230
  • RAM: 384 GB
  • 1.9 TB local storage mounted to /localscratch

Software

Compute nodes can only be accessed through the batch system from the login nodes. For more information, please refer to the dedicated documentation for Hawk and Vulcan.

Warning: For security reasons, the compute nodes have no internet connectivity.
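
Consequently, any input data or packages must be downloaded on a login node and staged to a filesystem the compute nodes can reach, such as your HOME or SCRATCH directory. The short Python sketch below only illustrates the idea; the URL and target path are placeholders.

    # Illustrative sketch only: because compute nodes have no internet access,
    # download input data on a login node first and read the staged copy from
    # your jobs. The URL and file name are placeholders, not real resources.
    import urllib.request
    from pathlib import Path

    url = "https://example.org/dataset.tar.gz"   # placeholder URL
    target = Path.home() / "dataset.tar.gz"      # or a path in your SCRATCH directory

    urllib.request.urlretrieve(url, str(target))
    print(f"Staged {target} ({target.stat().st_size} bytes)")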

Conda

Only the main and r channels are available through the conda module. If you need packages from other channels, a guide explains how to move local conda environments to the clusters.
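
After activating a transferred environment on the cluster, a quick check such as the following can confirm that the required packages resolve from that environment rather than from a system-wide installation. The package names below are placeholders; replace them with whatever your workflow needs.

    # Sanity check to run inside the activated conda environment on a node.
    import importlib
    import sys

    print("Python executable:", sys.executable)  # should point into the conda environment

    for name in ("numpy", "pandas"):             # placeholder package names
        try:
            module = importlib.import_module(name)
            version = getattr(module, "__version__", "unknown")
            print(f"{name} {version} from {module.__file__}")
        except ImportError:
            print(f"{name} is missing from this environment")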

Containers

For security reasons, only udocker is available, as it can run containers without sudo permissions or user namespace support. Our documentation contains a guide explaining how to run AI containers on the GPU-accelerated partitions.

Frameworks

You can install PyTorch and TensorFlow in a custom conda environment or container. In addition, our documentation has a guide for launching a Ray cluster on HLRS compute platforms.
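
As a starting point, the following sketch (assuming PyTorch is installed in your conda environment or container) checks that a job on a GPU partition actually sees the node's GPUs and runs a small computation on the first device.

    # Minimal GPU smoke test with PyTorch; run inside a batch job on rome-ai or clx-ai.
    import torch

    if torch.cuda.is_available():
        for index in range(torch.cuda.device_count()):
            print(f"GPU {index}: {torch.cuda.get_device_name(index)}")
        device = torch.device("cuda:0")
    else:
        device = torch.device("cpu")
        print("No GPU visible; running on CPU")

    # Trivial computation on the selected device.
    x = torch.rand(1024, 1024, device=device)
    print("Matrix multiply OK:", (x @ x).shape)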