- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

Vampir

The Vampir suite of tools offers scalable event analysis through a GUI that enables fast and interactive rendering of very complex performance data. The suite consists of Vampirtrace, Vampir and Vampirserver. Ultra-large data volumes can be analyzed with the parallel Vampirserver, which loads and analyses the data on the compute nodes while the Vampir GUI attaches to it.

Vampir is based on standard Qt and works on desktop Unix workstations as well as on parallel production systems. The program is available for nearly all platforms, such as Linux-based PCs and clusters, IBM, SGI, Sun, NEC, HP, and Apple.

[[Image:Vampir-logo.gif]]
* Developer: GWT-TUD GmbH
* Category: Performance Analyzer
* License: Commercial
* Website: Vampir homepage

== Usage ==


Vampir consists of a GUI and an analysis backend. In order to use Vampir, you first need to generate a trace of your application, preferably using [[Vampirtrace | VampirTrace]].
The Open Trace Format (OTF) trace consists of a file for each MPI process (<tt>*.events.z</tt>), a trace definition file (<tt>*.def.z</tt>) and the master trace file (<tt>*.otf</tt>) describing the other files. For details on how to generate OTF traces, see [[Vampirtrace]].
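Note that, being an MPI tool, VampirTrace depends not only on the compiler but also on the MPI version used. As a rough sketch of the workflow (the <tt>mpirun</tt> line and the process count are only an illustration, and module names may differ between systems; see [[Vampirtrace]] for the authoritative steps), instrumenting and tracing an application could look like this:
{{Command|command=
module load performance/vampirtrace<br>
vtcc -vt:mpi -vt:inst compinst -o myapp myapp.c<br>
mpirun -np 4 ./myapp
}}
After the run, the <tt>*.otf</tt>, <tt>*.def.z</tt> and <tt>*.events.z</tt> files described above appear in the working directory.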


=== Vampir ===
To analyze small traces (< 10 GB of trace data), you can use Vampir standalone with the default backend:
{{Command|command=
module load performance/vampir<br>
vampir
}}
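Depending on the installed Vampir version, you can also pass the master trace file directly on the command line instead of using the file dialog; <tt>myapp.otf</tt> is only a placeholder name here:
{{Command|command=
vampir myapp.otf
}}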


=== VampirServer ===
For large-scale traces (> 10 GB and up to many thousand MPI processes), use the parallel VampirServer backend (on compute nodes allocated through the queuing system), and attach to it using vampir.
Most likely you will select the number of nodes based on the trace file size and the memory available on a single node. However, you may need more than two times the trace size in memory to hold the trace.
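As a rough, purely illustrative calculation (the node memory figure is an assumption, not a statement about a specific system): to analyze a 100 GB trace you should budget at least 200-300 GB of aggregate memory, so on nodes with 256 GB of RAM you would request at least two nodes rather than one.
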
To start the server, first request an interactive job. For example, to run the vampirserver for 1 hour on 4 nodes of HAWK with 512 processes:

{{Command|command=qsub -I -lselect=4:mpiprocs=128,walltime=1:0:0

module load vampirserver  # on vulcan use module load performance/vampirserver instead

vampirserver start -n $((512 - 1))
}}
 
{{Warning|text=The number of analysis processes must not exceed the number of MPI processes requested minus one, because vampirserver uses one additional process as its master!}}
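For the example job above this works out as follows: 4 nodes × 128 MPI processes per node = 512 processes in total, so at most 512 - 1 = 511 analysis processes may be started, which is exactly what <tt>-n $((512 - 1))</tt> requests.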
 
This will show you a connection host and port:
<pre>
VampirServer 9.8.0
Licensed to HLRS
Running 511 analysis processes. (abort with vampirserver stop 66)
Server listens on: r15c1t7n1:30000
</pre>

From this output, note down the server name and port as well as the command to stop the vampirserver.
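When you are done with the analysis, shut the server down again with the stop command shown in its startup output; for the example output above this would be:
{{Command|command=
vampirserver stop 66
}}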


Now open a new shell, log in to one of the login nodes of the system via ssh (don't forget X forwarding) and start vampir:
{{Command|command=module load vampir  # on vulcan use module load performance/vampir instead
vampir
}}
Select "Open Other" and then choose "Remote File". In the window that opens, enter the server name and port displayed by VampirServer before. Proceed and select the trace you want to open.


[[Image:vampir_remote_open.png|Example of remote open on Nehalem-Cluster]]
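If you run the vampir GUI on your own workstation instead of a login node, the compute node's port may not be reachable directly. In that case, one option is to tunnel the port through a login node with standard ssh port forwarding before connecting; the user name and login node below are placeholders, and the host and port are taken from the example output above:
{{Command|command=
ssh -N -L 30000:r15c1t7n1:30000 username@login-node
}}
The GUI then connects to <tt>localhost:30000</tt>.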


== See also ==

== External links ==
