- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

Vampir

The Vampir suite of tools offers scalable event analysis through a GUI that enables fast, interactive rendering of very complex performance data. The suite consists of VampirTrace, Vampir, and VampirServer. Very large data volumes can be analyzed with the parallel version of VampirServer, which loads and analyzes the data on the compute nodes while the Vampir GUI attaches to it.

Vampir is based on standard Qt and works on desktop Unix workstations as well as on parallel production systems. The program is available for nearly all platforms, including Linux-based PCs and clusters, IBM, SGI, Sun, NEC, HP, and Apple.

Developer: GWT-TUD GmbH
Category: Performance Analyzer
License: Commercial
Website: Vampir homepage


Usage

In order to use Vampir, you first need to generate a trace of your application, preferably using VampirTrace. This trace consists of one file per MPI process plus an OTF file (Open Trace Format) that describes the other files. Tracing behavior is controlled via environment variables (starting with VT_). First load the module:

module load performance/vampirtrace
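For example, the size of the internal trace buffer can be raised to reduce intermediate flushes to disk; a minimal sketch (VT_BUFFER_SIZE and VT_MAX_FLUSHES are VampirTrace environment variables, the values here are only illustrative):

export VT_BUFFER_SIZE=64M
export VT_MAX_FLUSHES=4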


Please note that, being an MPI tool, VampirTrace depends not only on the compiler being used, but also on the MPI version.

You then recompile with the VampirTrace compiler wrappers, specifying the amount of instrumentation you want to include, e.g. -vt:mpi for MPI instrumentation or -vt:inst compinst for compiler-based function instrumentation:

vtcc -vt:mpi -vt:inst compinst -o myapp myapp.c
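Afterwards, run the instrumented binary as usual to produce the trace; a minimal sketch (the process count is illustrative, and the file names assume the default trace prefix derived from the executable name):

mpirun -np 4 ./myapp

This writes myapp.otf plus one trace file per MPI process into the working directory.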


To analyse your application with a small trace (fewer than 16 processes, only a few GB of trace data), use standalone Vampir:

module load performance/vampir
vampir
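The GUI can usually also be pointed at a trace file directly on the command line; a minimal sketch (myapp.otf is a placeholder for your own trace):

vampir myapp.otf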


For large-scale traces (up to many thousands of MPI processes), use the parallel VampirServer (on compute nodes allocated through the queuing system) and attach to it using Vampir:

>qsub -I -lnodes=16:nehalem:ppn=8,walltime=1:0:0
qsub: waiting for job 297851.intern2 to start
qsub: job 297851.intern2 ready
...
module load performance/vampirserver
mpirun -np 256 vampirserver-core
VampirServer 7.5.0
Licensed to HLRS
Running 255 analysis processes.
Server listens on: n010802:30000

Then, on the head node, use the Vampir GUI to "Remote open" the same file by attaching to the server running on the heavy-weight compute nodes (in the example above, the server listens on n010802:30000):

(Screenshot: example of a remote open on the Nehalem cluster)

Please note that vampirserver-core is memory-bound and may work best when started with only one MPI process per node, or one per socket, e.g. using Open MPI's mpirun option -bysocket.
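A minimal sketch of such a per-socket placement (option names as in Open MPI 1.x; the process count is only illustrative):

mpirun -np 32 -bysocket -bind-to-socket vampirserver-core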

VampirServer

VampirServer is the parallel version of the graphical trace file analysis software. The tool consists of a client and a server; the server itself is a parallel MPI program.

First, load the necessary software module:

module load performance/vampirserver


Now start the server and afterwards the client

On Laki:

mpirun -np 4 vampirserver-core &

On Hermit:

aprun -n 4 vampirserver-core &


Remember the port number that is displayed, as you will need it later on. Then start the graphical client:

vng


The last step is to connect the client to the server, using the host and port number reported by the server. This is done under

File <math>\rightarrow</math> Connect to Server ...


Then you can open your traces via

File <math>\rightarrow</math> Open Tracefile ...

Select the desired *.otf file.


Hermit special

Request an interactive session:

qsub -IX -lmppwidth=64 -lmppnppn=32 -lwalltime=2:00:00


Load the vampirserver module:

module load performance/vampirserver


Start the VampirServer on the compute nodes using:

vampirserver start -n 63 aprun

This will show you a connection host and port:

Launching VampirServer...
VampirServer 7.6.2 (r7777)
Licensed to HLRS
Running 31 analysis processes... (abort with vampirserver stop 10)
VampirServer <10> listens on: nid03447:30000

Please note that the number of analysis processes must not exceed the number of processes requested with -lmppwidth minus one.
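Later, when the analysis is finished, the server can be shut down with the command shown in the output above, in this example:

vampirserver stop 10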


Now forward the server's port to one of the login nodes (this makes localhost:30000 on the login node reach the compute node):

ssh -N -R 30000:nid03447:30000 eslogin003


Open a new terminal and log in to the login node selected in the previous command:

ssh -Y eslogin003.hww.de


Now you can proceed as described in the Vampir section above.
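Because of the ssh tunnel, the VampirServer is now reachable from the login node itself; a sketch of the final connection (assuming the performance/vampir module from the Usage section, and the port 30000 forwarded above):

module load performance/vampir
vampir

Then use File <math>\rightarrow</math> Connect to Server ... and enter localhost as the host and 30000 as the port.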
