- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

DDT

Allinea DDT helps developers fix bugs quickly - from the desktop to the largest supercomputer. The most scalable parallel debugger for debugging MPI and multi-threaded codes, DDT leads the world in performance and usability.

Developer: Allinea
Platforms: NEC Nehalem Cluster
Category: Debugger
License: Commercial
Website: Allinea homepage

== Usage ==
DDT is available through modules
{{Command|command =
module load forge
}}
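Several versions may be installed; which ones are available can usually be listed with the module system (module names and output depend on the respective platform):
{{Command|command =
module avail forge
}}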


{{Note| text =
Do not forget to compile your application with debugging info (<tt>-g</tt> option)
}}
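For example, an MPI code can be compiled with debug information along these lines (<tt>your_app.c</tt> and <tt>mpicc</tt> stand in for your own source file and MPI compiler wrapper):
{{Command|command =
mpicc -g your_app.c -o your_app
}}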


== Examples ==
=== Usage on HAWK ===

To debug a program with ddt on HAWK, you have to use the ''Reverse Connection'' feature.
Therefore, first launch ddt on a login node:
{{Command | command =
module load forge<br>
ddt
}}


Then execute the program you want to debug in a separate shell, either via a job script or an interactive job: load the forge module there and modify your ''mpirun'' command line as follows:
{{Command | command=module load forge<br>
ddt --connect mpirun ...
}}
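For example, in an interactive job the two steps can be combined like this (a sketch: <tt>-np 128</tt> and <tt>./your_app</tt> are placeholders for your own process count and executable):
{{Command| command =
module load forge<br>
ddt --connect mpirun -np 128 ./your_app
}}
The ddt GUI on the login node must already be running so that it can accept the connection request.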


After some time a connection request window will pop up in the ddt GUI. Accept the request and you will get the ddt run window to start debugging.

For more information visit the [https://developer.arm.com/docs/101136/latest/arm-forge/connecting-to-a-remote-system ARM documentation].


=== starting the application from inside DDT ===

Get an interactive job (with X forwarding) and set up the environment within it:
{{Command| command =
module load forge<br>
module load openmpi
}}
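An interactive job with X forwarding can be requested through the batch system, for example (the exact job options depend on the system and have to be added):
{{Command| command =
qsub -I -X [other job options]
}}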


Start DDT:
{{Command| command =
ddt your_app
}}


Select the right MPI implementation in the options and run your program.
<!-- === attaching to an already running application === -->


== See also ==

== External links ==