- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

Compiler(Hawk)


In order to build MPI applications, please use the compiler wrappers mpif77 / mpif90 / mpif08 / mpicc / mpicxx.

Please note that the compilers currently do not apply optimization flags by default. Hence, please refer to the Compiler Options Quick Reference Guide and set the respective flags yourself (use znver1 for Naples nodes and znver2 for Rome nodes). The Compiler Usage Guidelines for AMD64 Platforms might also be a source of inspiration regarding optimization flags.
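
As an illustration (program and file names are hypothetical, and the example assumes the wrapper invokes a GNU- or AOCC-based compiler underneath), an MPI source file could be built with the mpicc wrapper and explicit optimization flags for Rome nodes:

# Illustrative only: MPI C code built with explicit optimization flags for Rome nodes
mpicc -O3 -march=znver2 -o mpi_app mpi_app.c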


Available compilers

We highly recommend trying as many different compilers as possible and comparing the performance of the generated code! If you code according to language standards, this is almost free but can give you a significant speedup! There is no such thing as an "ideal" compiler: one suits application A better, another suits application B better (cf. Best Practice Guide AMD EPYC (Naples)).


GNU

Make sure to load a more up-to-date version of the GNU Compiler Collection than the one preinstalled on the system

module load compiler/gnu/9.1.0

Then compile with

<compiler> -march=znver2
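
For example, an optimized build of a single C++ source file might look like this (file and program names are illustrative, the optimization level is just an example):

# Illustrative only: optimized build for Rome nodes with g++ 9.1.0
g++ -O3 -march=znver2 -o app app.cpp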


AOCC

AOCC is the AMD Optimizing C/C++ Compiler based on LLVM. It contains a Fortran compiler (flang) as well.

Load the aocc module

module load compiler/aocc/2.0.0

Compile with

clang/clang++/flang -march=znver2

AOCC comes with a couple of exclusive compiler flags that are not part of LLVM and allow more aggressive optimizations; they are listed in the C/C++ and Fortran compiler manuals.
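
A full invocation might look as follows (file names are illustrative, the optimization level is just an example):

# Illustrative only: optimized Fortran build with AOCC's flang for Rome nodes
flang -O3 -march=znver2 -o app app.f90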


Intel

Please use

<compiler> -march=core-avx2

and do not use

<compiler> -xCORE-AVX2

since the latter might give very bad performance!
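
For example (file and program names are illustrative, the optimization level is just an example):

# Illustrative only: optimized build with the Intel C compiler
icc -O3 -march=core-avx2 -o app app.c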


PGI

With respect to PGI, we recommend using

<compiler> -tp=zen -O3
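
For example (file and program names are illustrative):

# Illustrative only: optimized build with the PGI C compiler
pgcc -tp=zen -O3 -o app app.c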

Compiler Options for High Performance Computing

This section shows compiler flags for GNU-compatible compilers (gnu, aocc, intel); other compilers may offer other options for the described functionality.


Static Linking

Large jobs with thousands of processes can overload the file systems connected to the cluster during startup if the binary is linked to (many) shared libraries that are stored on these file systems.

To avoid this issue, and also to improve performance by reducing the overhead of function calls into shared libraries, compiling dependencies statically is recommended.

At link time, you can tell the compiler to look for static libraries instead of shared libraries in the library search path with

# Link libhdf5 + zlib statically, then switch back to shared libraries (the default) afterwards
<compiler> ... -Wl,-Bstatic -lhdf5_fortran -lhdf5_f90cstub -lhdf5 -lz -Wl,-Bdynamic

You can also specify a static library filename in the library search path directly

# Statically link hdf5 + zlib
<compiler> ... -l:libhdf5_fortran.a -l:libhdf5_f90cstub.a -l:libhdf5.a -l:libz.a

Or provide the full path to the static library like with other object files

# Statically link hdf5 + zlib
<compiler> ... /path/to/static/lib/libhdf5_fortran.a /path/to/static/lib/libhdf5_f90cstub.a /path/to/static/lib/libhdf5.a /path/to/static/lib/libz.a

Keep in mind that all the symbols referenced in the static library need to be resolved during linking. Thus, linking to additional (static) libraries may be required. In some cases the order of the linked static libraries is important, as with the hdf5 example above.
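
After linking, you can check which dependencies are still resolved dynamically at run time, for instance with ldd (the program name is a placeholder; the output depends on your build):

# List the remaining shared library dependencies of the binary
ldd program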


Link-Time Optimization (LTO)

This technique allows the compiler to optimize the code at link time, when code from the separate object files can be rearranged and optimized further.

An article about LTO performance comparison with GCC 10: https://www.phoronix.com/scan.php?page=article&item=gcc10-lto-tr

The option needs to be set at compile time and link time:

# Compile with LTO in mind
<compiler> -flto -o component1.o -c component1.c
<compiler> -flto -o component2.o -c component2.c

# Link with LTO
<compiler> -flto -o program component1.o component2.o

Keep in mind that LLVM-based compilers (such as AOCC) produce LLVM bitcode files instead of ELF object files when using LTO. Tools like objdump, readelf, strip, etc. will not work on these files.
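
If you are unsure what a compiler produced, the file utility shows whether an object file is a regular ELF object or LLVM bitcode (a sketch; the exact output depends on your toolchain):

# With LTO enabled, LLVM-based compilers typically report "LLVM IR bitcode" here
file component1.o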

More information here: https://www.llvm.org/docs/LinkTimeOptimization.html


Profile Guided Optimization (PGO)

This optimization can lead to a 10-20% boost in performance in some cases. In essence, it collects information about how the program actually runs and uses it to improve the compiler's assumptions about which code paths are most likely to be taken.

An article about PGO performance comparison with GCC 10: https://www.phoronix.com/scan.php?page=news_item&px=GCC-10-PGO-3960X-Xmas-Eve

This requires the code to be compiled twice, with the program being run on a representative use case in between.
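
As a sketch of the workflow with GCC (the profiling flags are standard GCC options; program and input names are hypothetical):

# 1. Build with instrumentation to collect profile data
gcc -O3 -fprofile-generate -o program program.c

# 2. Run a representative workload; this writes .gcda profile files
./program representative_input

# 3. Rebuild using the collected profile
gcc -O3 -fprofile-use -o program program.c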

A good example for GCC can be found here:
https://developer.ibm.com/articles/gcc-profile-guided-optimization-to-accelerate-aix-applications/

PGO documentation for LLVM:
https://clang.llvm.org/docs/UsersManual.html#profiling-with-instrumentation

PGO documentation for the Intel Compiler:
https://software.intel.com/en-us/cpp-compiler-developer-guide-and-reference-profile-guided-optimization-pgo