- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

Score-P
Revision as of 12:42, 21 April 2021

The Score-P instrumentation infrastructure allows tracing and sampling of MPI and OpenMP parallel applications. Among other things, it is used to generate traces in the OTF2 format for the trace viewer Vampir and profiling records in the cubex format for the CUBE visualizer.
Developer: ZIH TU Dresden and JSC/FZ Juelich
Platforms: Vulcan, HPE Hawk
Category: Performance Analyzer
License: BSD License
Website: Score-P homepage


Introduction

Analyzing an application with Score-P is done in multiple steps:

  1. Compiling the application with the scorep compiler wrappers
  2. Running the instrumented application
  3. Analyzing the performance records with CUBE for profiles or with Vampir for traces
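Taken together, the three steps look like this in a shell session. This is a sketch, not a verbatim recipe: the source file name, process count, and optimization flags are illustrative, and the module name is the one used on Hawk below.

```shell
# 1. Load Score-P and compile through the wrapper (module name as on Hawk)
module load scorep
scorep-mpicc -O2 -g -o app app.c

# 2. Run the instrumented binary; Score-P writes its records
#    into a scorep-* experiment directory in the working directory
export SCOREP_ENABLE_PROFILING=true
mpirun -np 4 ./app

# 3. Open the resulting profile in the CUBE viewer
cube scorep-*/profile.cubex
```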

Usage

Compiling with scorep

First load the needed software module:

# on HAWK

module load scorep

# on Vulcan

module load performance/vampirtrace

Now you can compile your application using the scorep compiler wrappers in place of the original C, C++, and Fortran compilers:

scorep-mpif90

scorep-mpicc

scorep-mpicxx
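In an existing build it is usually enough to substitute the wrapper for the compiler variable; a hypothetical single-file example (the source file name is made up):

```shell
# Compile and link an MPI C program through the Score-P wrapper
scorep-mpicc -O2 -g -c solver.c
scorep-mpicc -o solver solver.o

# With a Makefile, overriding the compiler variable is often sufficient:
# make CC=scorep-mpicc
```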


Generating the trace/profile files

Run the instrumented binary of your application. This generates the trace and profile files.

export SCOREP_ENABLE_TRACING=false # set to true to generate OTF2 trace files for Vampir; check the overhead with profiling first

export SCOREP_ENABLE_PROFILING=true # set to true to generate a cubex profile for CUBE
# export SCOREP_FILTERING_FILE=<filter file> # specify a filter file to reduce overhead if necessary
export MPI_SHEPHERD=true # needed for MPT on HAWK

mpirun <mpi options> <app> <app arguments>
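If the instrumentation overhead is too high, a filter file can exclude short, frequently called regions from measurement. A minimal sketch of the Score-P filter syntax — the region names here are purely illustrative:

```shell
# Write a minimal Score-P filter file (region names are made up)
cat > scorep.filter <<'EOF'
SCOREP_REGION_NAMES_BEGIN
  EXCLUDE
    small_helper_*
    interpolate_point
SCOREP_REGION_NAMES_END
EOF

# Point Score-P at the filter file before running the instrumented binary
export SCOREP_FILTERING_FILE=$PWD/scorep.filter
```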


PAPI counter information

To include PAPI counter information into your analysis, set the following variable to the desired PAPI counter names:

export SCOREP_METRIC_PAPI=PAPI_TOT_INS,PAPI_FP_INS
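Which counters are available depends on the CPU, so it is worth checking before a run. Assuming the PAPI utilities are installed (on a cluster, run this on a compute node, not the login node):

```shell
# List the PAPI preset counters available on this machine
papi_avail -a

# Score-P can also list all of its measurement configuration variables
scorep-info config-vars --full
```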

