- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -
Score-P
Latest revision as of 13:59, 18 February 2025
The Score-P instrumentation infrastructure allows tracing and sampling of MPI and OpenMP parallel applications. Among others, it is used to generate traces in the otf2 format for the trace viewer Vampir and profiling records in the cubex format for the CUBE visualizer.
Introduction
Analyzing an application with Score-P is done in multiple steps:
- Compiling the application with the scorep compiler wrapper
- Running the instrumented application
- Analyzing the performance records with CUBE for profiles or with Vampir for traces
See also this page for a more detailed Score-P based workflow for profiling and tracing.
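The steps above can be sketched as a minimal shell session. This is only an illustration under assumptions: `heat_mpi.c` and `heat_mpi` are placeholder names, the wrapper `scorep-mpicc` is the Vulcan/HAWK variant, and the CUBE GUI is assumed to be available as `cube`.

```
# 1. Compile with the Score-P compiler wrapper instead of mpicc
scorep-mpicc -O2 -g -o heat_mpi heat_mpi.c

# 2. Run the instrumented binary; records land in a scorep-* experiment directory
export SCOREP_ENABLE_PROFILING=true
mpirun -n 4 ./heat_mpi

# 3. Inspect the resulting profile with CUBE (or open an otf2 trace in Vampir)
cube scorep-*/profile.cubex
```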
Usage
Compiling with scorep
First load the needed software module:
# on Hunter and HAWK
module load scorep
# on Vulcan
module load performance/scorep
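To check that the module was picked up, you can query the installed Score-P version and its build configuration. This assumes the module puts the `scorep` tools on your PATH:

```
scorep --version           # print the Score-P version
scorep-info config-summary # show how this installation was configured
```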
Now you can compile your application using the scorep compiler wrappers in place of the original C, C++, and Fortran compilers:
# on Hunter:
scorep-ftn
scorep-cc
scorep-CC
# on Vulcan and HAWK:
scorep-mpif90
scorep-mpicc
scorep-mpicxx
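For example, a single-file MPI program might be built like this (file and binary names are placeholders; on Hunter you would use `scorep-cc` instead). The wrapper forwards all remaining flags to the underlying compiler:

```
scorep-mpicc -O2 -g -o heat_mpi heat_mpi.c
```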
If you are using a build system like CMake or Autotools for your software project, you have to take care of the scorep-wrapper specifics. Make sure to have a look at the scorep-wrapper documentation.
As a starting point for CMake you can use the following:
SCOREP_WRAPPER=off cmake .. \
-DCMAKE_C_COMPILER=scorep-cc \
-DCMAKE_CXX_COMPILER=scorep-CC
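For Autotools the same pattern applies: disable the wrapper while `configure` runs its compiler tests, then build with the wrapper active. A sketch, assuming an out-of-tree build directory:

```
# configure with instrumentation disabled so configure's test programs are not instrumented
SCOREP_WRAPPER=off ../configure CC=scorep-cc CXX=scorep-CC

# build with the wrapper active (the default when SCOREP_WRAPPER is unset)
make
```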
Generating the trace/profile files
Run the instrumented binary of your application. This will generate the needed trace and profile files.
export SCOREP_ENABLE_TRACING=false # enable to generate otf2 trace files for Vampir; best check the overhead with profiling first
export SCOREP_ENABLE_PROFILING=true # enable to generate a cubex profile for CUBE
# export SCOREP_FILTERING_FILE=<filter file> # specify a filter file to reduce overhead if necessary
export MPI_SHEPHERD=true # needed for MPT on HAWK
mpirun <mpi options> <app> <app arguments>
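A filter file as referenced by SCOREP_FILTERING_FILE uses Score-P's filter syntax to exclude regions from measurement. A minimal sketch (the region names here are made up; you would take them from a profile of your own code):

```
SCOREP_REGION_NAMES_BEGIN
  EXCLUDE
    small_helper_*
    matvec_kernel
SCOREP_REGION_NAMES_END
```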
PAPI counter information
To include PAPI counter information into your analysis, set the following variable to the desired PAPI counter names:
export SCOREP_METRIC_PAPI=PAPI_TOT_INS,PAPI_FP_INS
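Which counters are available depends on the CPU. The PAPI utility `papi_avail` (part of the PAPI installation, possibly behind a separate module) lists the preset counters supported on the current machine:

```
papi_avail -a   # list the available preset counters
```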
Hints
In case there are problems with the post-processing of traces, we suggest adding the following options to the post-processing tool in order to produce a 'scout.cubex' output:
export SCAN_ANALYZE_OPTS="--no-time-correct --single-pass"
scan -t -s mpirun <mpi options> <app> <app arguments>
If the '.otf2' trace file already exists, one can also call the post-processing tool manually:
mpirun -n <#ranks> scout.mpi --no-time-correct --single-pass <path_to_tracefile>
There also exist `scout.ser`, `scout.omp` and `scout.hyb` variants for serial, OpenMP and hybrid jobs, respectively.
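As a small illustration, the mapping from programming model to analyzer binary can be written as a shell lookup (the `model` value is a placeholder you would set according to your job):

```shell
#!/bin/sh
# Map the programming model of the job to the matching Scalasca analyzer binary.
model="hybrid"   # one of: serial, openmp, mpi, hybrid
case "$model" in
  serial) scout=scout.ser ;;
  openmp) scout=scout.omp ;;
  mpi)    scout=scout.mpi ;;
  hybrid) scout=scout.hyb ;;
esac
echo "$scout"
```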