- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

Score-P


The Score-P instrumentation infrastructure allows tracing and sampling of MPI and OpenMP parallel applications. Among other things, it is used to generate traces in the otf2 format for the trace viewer Vampir and profiling records in the cubex format for the CUBE visualizer.
Developer: ZIH TU Dresden and JSC/FZ Juelich
Platforms: Vulcan, HPE_Hawk
Category: Performance Analyzer
License: BSD License
Website: Score-P homepage (https://www.vi-hps.org/projects/score-p/)


Introduction

Analyzing an application with Score-P is done in multiple steps:

  1. Compiling the application with the scorep compiler wrappers
  2. Running the instrumented application
  3. Analyzing the performance records with CUBE for profiles or with Vampir for traces

See also the page Workflow for Profiling and Tracing with Score-P and Scalasca for a more detailed Score-P based workflow for profiling and tracing.

Usage

Compiling with scorep

First load the needed software module:

# on HAWK

module load scorep

# on Vulcan

module load performance/scorep
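
To check that the module provides the expected installation, you can, for instance, print the installed Score-P version:

scorep --version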

Now you can compile your application using the scorep compiler wrappers in place of the original C, C++, and Fortran compilers:

scorep-mpif90

scorep-mpicc

scorep-mpicxx
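
As an example, a small MPI C code could be built with instrumentation like this (the file name solver.c and the compiler flags are only illustrative):

scorep-mpicc -O2 -g -c solver.c

scorep-mpicc -O2 -g -o solver solver.o

In Makefile-based builds this usually amounts to setting CC=scorep-mpicc, FC=scorep-mpif90, and CXX=scorep-mpicxx.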


Generating the trace/profile files

Run the instrumented binary of your application. This will generate the needed trace and profile files.

export SCOREP_ENABLE_TRACING=false # set to true to generate otf2 trace files for Vampir; check the measurement overhead with profiling first

export SCOREP_ENABLE_PROFILING=true # set to true to generate a cubex profile for CUBE
# export SCOREP_FILTERING_FILE=<filter file> # specify a filter file to reduce measurement overhead if necessary
export MPI_SHEPHERD=true # needed for MPT on HAWK

mpirun <mpi options> <app> <app arguments>
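
If the measurement overhead is too high, a Score-P filter file can exclude short, frequently called routines from the measurement. A minimal sketch of such a file (the region names below are only placeholders for functions in your own code):

SCOREP_REGION_NAMES_BEGIN
  EXCLUDE
    small_helper_*
    tiny_kernel
SCOREP_REGION_NAMES_END

Save it, for example, as scorep.filt and point SCOREP_FILTERING_FILE to this file before starting the run.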


PAPI counter information

To include PAPI counter information in your analysis, set the following variable to the desired PAPI counter names:

export SCOREP_METRIC_PAPI=PAPI_TOT_INS,PAPI_FP_INS
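
Which counters are usable depends on the processor; if the PAPI utilities are installed on the system, the papi_avail tool lists the preset events that are actually available on the current node:

papi_avail -a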


Hints

If there are problems with the post-processing of traces, we suggest adding the following options to the post-processing tool in order to produce a 'scout.cubex' output:

export SCAN_ANALYZE_OPTS="--no-time-correct --single-pass"
scan -t -s mpirun <mpi options> <app> <app arguments>

If the '.otf2' trace file already exists, one can also call the post-processing tool manually:

mpirun -n <#ranks> scout.mpi --no-time-correct --single-pass <path_to_tracefile>

There also exist `scout.ser`, `scout.omp`, and `scout.hyb` variants for serial, OpenMP, and hybrid jobs, respectively.
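
The resulting scout.cubex report, like the profile.cubex written during profiling, can then be opened with the CUBE GUI, assuming CUBE is available on the system:

cube scout.cubex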

See also

External Links

Score-P Homepage: https://www.vi-hps.org/projects/score-p/