- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

Score-P

The Score-P instrumentation infrastructure allows tracing and sampling of MPI and OpenMP parallel applications. Among others, it is used to generate traces in the otf2 format for the trace viewer Vampir and profiling records in the cubex format for the CUBE visualizer.
Developer: ZIH TU Dresden and JSC/FZ Juelich
Platforms: Vulcan, HPE_Hawk
Category: Performance Analyzer
License: BSD License
Website: Score-P homepage


Introduction

Analyzing an application with Score-P is done in multiple steps:

  1. Compiling the application with the scorep compiler wrappers
  2. Running the instrumented application
  3. Analyzing the performance records with CUBE for profiles or with Vampir for traces

See also this page for a more detailed Score-P based workflow for profiling and tracing.
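
In practice these three steps look roughly as follows. This is only a minimal sketch: the source file my_app.c and the number of ranks are placeholders, and the cube and vampir viewers are assumed to be available (e.g. via their respective modules). The experiment directory name scorep-* and the file names profile.cubex and traces.otf2 are the Score-P defaults.

# 1. compile with a scorep wrapper (Hunter shown, see below for Vulcan and HAWK)
scorep-cc -O2 -o my_app my_app.c

# 2. run the instrumented binary; a scorep-* experiment directory is created
mpirun -n 4 ./my_app

# 3. inspect the results
cube scorep-*/profile.cubex    # profile in CUBE
vampir scorep-*/traces.otf2    # trace in Vampir (only if tracing was enabled)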

Usage

Compiling with scorep

First load the needed software module:

# on Hunter and HAWK
module load scorep

# on Vulcan 
module load performance/scorep
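
To check which installation is active, the wrapper itself can be queried; scorep --version and scorep-info config-summary are standard Score-P commands, the output will of course depend on the installed version.

scorep --version              # print the version of the loaded Score-P
scorep-info config-summary    # show how this Score-P installation was configured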

Now you can compile your application using the scorep compiler wrappers in place of the original C, C++, and Fortran compilers:

# on Hunter:
scorep-ftn
scorep-cc
scorep-CC
# on Vulcan and HAWK:
scorep-mpif90
scorep-mpicc
scorep-mpicxx
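
For example, building a hypothetical MPI source file my_app.c (the file name and optimization flag are placeholders; in existing Makefiles it is usually enough to replace the compiler variables, e.g. MPICC or MPIFC, with the corresponding wrapper):

# on Hunter:
scorep-cc -O2 -o my_app my_app.c
# on Vulcan and HAWK:
scorep-mpicc -O2 -o my_app my_app.c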


Generating the trace/profile files

Run your application using the instrumented binary. This will generate the needed trace and/or profile files.

export SCOREP_ENABLE_TRACING=false   # enable to generate otf2 trace files for Vampir, best check the overhead with profiling first
export SCOREP_ENABLE_PROFILING=true  # enable to generate a cubex profile for CUBE
# export SCOREP_FILTERING_FILE=<filter file> # specify a filter file to reduce overhead if necessary (see the example below)
export MPI_SHEPHERD=true             # needed for MPT on HAWK
mpirun <mpi options> <app> <app arguments>
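
If the instrumentation overhead is too high, small and frequently called regions can be excluded via a filter file. The lines below are a sketch: scorep-score is part of Score-P and reports how much each region contributes to the measurement, the filter syntax shown is the standard Score-P filter format, and the region names are only placeholders.

# estimate trace size and identify small, frequently called regions
scorep-score -r scorep-*/profile.cubex

# example filter file (e.g. scorep.filter), to be referenced via SCOREP_FILTERING_FILE
SCOREP_REGION_NAMES_BEGIN
  EXCLUDE
    small_helper_*
    MATMUL_SUB
SCOREP_REGION_NAMES_END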


PAPI counter information

To include PAPI counter information in your analysis, set the following variable to a comma-separated list of the desired PAPI counter names:

export SCOREP_METRIC_PAPI=PAPI_TOT_INS,PAPI_FP_INS
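
Which counters are available depends on the processor. The papi_avail utility shipped with PAPI lists them (a PAPI module may have to be loaded first):

papi_avail                  # list the preset counters and whether they are supported
papi_avail -e PAPI_TOT_INS  # show details for a single counter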


Hints

In case of problems with the post-processing of traces, we suggest adding the following options to the post-processing tool in order to produce a 'scout.cubex' output:

export SCAN_ANALYZE_OPTS="--no-time-correct --single-pass"
scan -t -s mpirun <mpi options> <app> <app arguments>

If the '.otf2' trace file already exists, one can also call the post-processing tool manually:

mpirun -n <#ranks> scout.mpi --no-time-correct --single-pass <path_to_tracefile>

There also exist `scout.ser`, `scout.omp` and `scout.hyb` variants for serial, OpenMP and hybrid jobs, respectively.

See also

External Links