Scalasca
Scalasca is an open-source toolset for the performance analysis of parallel applications that helps to identify optimization opportunities. It has been specifically designed for use on large-scale systems, including Cray XT/XE, but is also well suited for small- and medium-scale HPC platforms. Scalasca supports an incremental performance-analysis procedure that integrates runtime summaries with in-depth studies of concurrent behavior via event tracing, adopting a strategy of successively refined measurement configurations. A distinctive feature is its ability to identify wait states that occur, for example, as a result of unevenly distributed workloads.
Usage
To use Scalasca, start by loading the respective software modules:
# on HAWK:
module load scalasca scorep cube
# on Vulcan:
module load performance/scalasca performance/scorep
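As an optional sanity check, you can confirm that the tools are now available in your environment:
which scorep scalasca    # should resolve to paths within the loaded modules
module list              # lists the currently loaded modules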
Then, instrument your application using Score-P by prefixing all MPI compiler and linker commands with the scorep command:
scorep mpicc myapp.c -o myapp
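For a project built with make, the prefix can often be injected at the command line instead of editing each rule; a minimal sketch, assuming the Makefile uses the conventional CC and FC variables (the variable names are illustrative for your build system):
# override the compiler variables so every compile/link step is instrumented
make CC="scorep mpicc" FC="scorep mpif90"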
Next, analyze your application using the scalasca -analyze command with your usual mpirun command:
scalasca -analyze mpirun -n ${NP} myapp
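On the HLRS systems the analysis run is normally submitted through the batch system; a minimal PBSPro sketch, assuming Hawk's module names (job name, resources, rank count, and walltime are example values to adjust):
#!/bin/bash
#PBS -N scalasca_run               # example job name
#PBS -l select=1:mpiprocs=128      # example: one node, 128 MPI ranks
#PBS -l walltime=00:20:00          # example walltime
cd $PBS_O_WORKDIR                  # run from the submission directory
module load scalasca scorep cube   # module names as on HAWK
export MPI_SHEPHERD=true           # mandatory when running with MPT on HAWK
scalasca -analyze mpirun -n 128 myapp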
After the run there will be a folder matching "scorep_*" containing the measurement results.
Warning: The scalasca -analyze command evaluates the provided MPI command line and will try to determine the used MPI implementation, the number of MPI ranks (based on the '-n' option), additional MPI options, the application name, and the application's arguments. If you encounter problems at this point, please consult the Scalasca documentation for details.
The last step is to explore the generated reports using the scalasca -examine command, with the previously created "scorep_*" folder as the experiment name:
scalasca -examine <experiment_name>
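If no X connection is available, scalasca -examine can instead print a plain-text score summary; the experiment directory name below is illustrative (the actual name depends on your application name and rank count):
scalasca -examine -s scorep_myapp_128_sum   # text score report instead of the GUI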
If you have enabled X forwarding, the examine step automatically opens Cube with the generated report. You can also inspect the generated *.cubex report files by starting the Cube GUI manually:
cube <file>.cubex
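For example, to open the raw Score-P profile from the run above (profile.cubex is Score-P's default profile file name; the directory name is again illustrative):
cube scorep_myapp_128_sum/profile.cubex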
Usage on Hawk
Warning: It is necessary to set export MPI_SHEPHERD=true for the scalasca -analyze command.