- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

Score-P

From HLRS Platforms
Revision as of 10:48, 5 July 2023 by Hpcjohig (talk | contribs) (add hints for scout options)
The Score-P instrumentation infrastructure allows tracing and sampling of MPI and OpenMP parallel applications. Among other uses, it generates traces in the OTF2 format for the trace viewer Vampir and profiling records in the CUBEX format for the CUBE visualizer.
Developer: ZIH TU Dresden and JSC/FZ Juelich
Platforms: Vulcan, HPE_Hawk
Category: Performance Analyzer
License: BSD License
Website: Score-P homepage


Introduction

Analyzing an application with Score-P is done in multiple steps:

  1. Compiling the application with the Score-P compiler wrappers
  2. Running the instrumented application
  3. Analyzing the performance records with CUBE for profiles or with Vampir for traces
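Put together, a minimal session might look like the following sketch (the source file, binary name, and rank count are illustrative; the module name differs per system as described below):

```shell
# 1. Compile with the Score-P wrapper instead of the plain MPI compiler
module load scorep                  # on Vulcan: module load performance/scorep
scorep-mpicc -O2 -o app_instr app.c

# 2. Run the instrumented binary to record a profile
export SCOREP_ENABLE_PROFILING=true
mpirun -n 4 ./app_instr

# 3. Inspect the resulting profile in the scorep-* experiment directory
cube scorep-*/profile.cubex
```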

See also this page for a more detailed Score-P based workflow for profiling and tracing.

Usage

Compiling with scorep

First load the needed software module:

# on HAWK

module load scorep

# on Vulcan

module load performance/scorep

Now you can compile your application using the scorep compiler wrappers in place of the original C, C++, and Fortran compilers:

scorep-mpif90

scorep-mpicc

scorep-mpicxx
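For an existing Makefile-based build, it is usually enough to override the compiler variables instead of editing the Makefile; a sketch (variable names depend on your Makefile):

```shell
# Rebuild with the Score-P wrappers substituted for the MPI compilers
make CC=scorep-mpicc CXX=scorep-mpicxx FC=scorep-mpif90
```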


Generating the trace/profile files

Run the instrumented binary of your application. This will generate the required trace and profile files.

export SCOREP_ENABLE_TRACING=false # enable to generate OTF2 trace files for Vampir; check the overhead with profiling first

export SCOREP_ENABLE_PROFILING=true # enable to generate cubex profile for CUBE
# export SCOREP_FILTERING_FILE=<filter file> # specify filter file to reduce overheads if necessary
export MPI_SHEPHERD=true # needed for MPT on HAWK

mpirun <mpi options> <app> <app arguments>
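The filter file referenced by SCOREP_FILTERING_FILE uses Score-P's filter syntax to exclude regions from measurement. A minimal sketch that excludes some frequently called small functions by name (the function names here are placeholders):

```
SCOREP_REGION_NAMES_BEGIN
  EXCLUDE
    small_helper_*
    compute_kernel_inner
SCOREP_REGION_NAMES_END
```

Excluding short, frequently executed functions is the usual way to reduce instrumentation overhead before enabling tracing.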


PAPI counter information

To include PAPI counter information into your analysis, set the following variable to the desired PAPI counter names:

export SCOREP_METRIC_PAPI=PAPI_TOT_INS,PAPI_FP_INS
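Which counters are available depends on the machine. The papi_avail utility shipped with PAPI installations lists the preset counters, so you can check a counter name before putting it into SCOREP_METRIC_PAPI:

```shell
# List all PAPI preset counters known on this node
papi_avail

# Restrict the listing to counters that are actually available
papi_avail -a
```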


Hints

In case there are problems with the post-processing of traces, we suggest adding the following options to the post-processing tool in order to produce a 'scout.cubex' output:

export SCAN_ANALYZE_OPTS="--no-time-correct --single-pass"
scan -t -s mpirun <mpi options> <app> <app arguments>

If the '.otf2' trace file already exists, one can also call the post-processing tool manually:

mpirun -n <#ranks> scout.mpi --no-time-correct --single-pass <path_to_tracefile>

There are also `scout.ser`, `scout.omp`, and `scout.hyb` variants for serial, OpenMP, and hybrid jobs, respectively.
