- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

Hawk PrePostProcessing: Difference between revisions

From HLRS Platforms
For pre- and post-processing purposes with large memory requirements, multi-user SMP nodes are available. These nodes are reachable via the smp queue. Please see the [[Batch_System_PBSPro_(Hawk)#Node_types|node types]], [[Batch_System_PBSPro_(Hawk)#Examples_for_starting_batch_jobs:|examples]] and [[Batch_System_PBSPro_(Hawk)#smp|smp]] sections of the [[Batch_System_PBSPro_(Hawk)|batch system documentation]].
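As an illustration, an interactive session in the smp queue could be requested as sketched below. The resource selectors shown here are placeholders; the authoritative names and limits are given in the batch system documentation linked above.
<pre>
# sketch: request an interactive job in the smp queue
# (node_type and walltime values are examples, adjust to your needs)
qsub -I -q smp -l select=1:node_type=smp -l walltime=01:00:00
</pre>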


== Remote Visualization ==
Since Hawk is not equipped with graphics hardware, remote visualization has to be done either via the Vulcan cluster or via locally installed clients.


=== ParaView ===


ParaView is installed on Hawk with MPI support for parallel execution and rendering on the compute nodes. The installation is available via the module environment.
<pre>
module load paraview/server/<version>
</pre>
To enable parallel data processing and rendering on the CPU, ParaView is built with Mesa's llvmpipe (Gallium) software renderer. This means the Qt-based graphical user interface (GUI), i.e. the <code>paraview</code> command and with it the ParaView client, is not available. Scripted usage is possible via <code>pvbatch</code>. For interactive parallel post-processing and visualization, <code>pvserver</code> has to be used.
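As a sketch, a minimal <code>pvbatch</code> script using the <code>paraview.simple</code> module might look as follows. The file name and output path are placeholders, and a real workflow would load a dataset instead of the synthetic source shown here.
<pre>
# sketch: save e.g. as render.py and run with  mpirun pvbatch render.py
from paraview.simple import *

# create a simple data source in place of a real dataset
sphere = Sphere(ThetaResolution=32, PhiResolution=32)
Show(sphere)
Render()
SaveScreenshot('sphere.png')
</pre>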
 
==== Client-Server Execution ====
Graphical ParaView clients matching the server versions on Hawk are installed on the [[Vulcan|Vulcan]] cluster. For an efficient connection we recommend using the <code>vis_via_vnc.sh</code> script available via the VirtualGL module on Vulcan.
<pre>
ssh vulcan.hww.hlrs.de
module load tools/VirtualGL
vis_via_vnc.sh
</pre>
Please note that this will reserve a standard visualization node on Vulcan for one hour. Additional nodes with other graphics hardware are available, and the reservation time can be increased. For a full list of options please use
<pre>
vis_via_vnc.sh --help
</pre>
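For a manual client-server setup, a parallel <code>pvserver</code> can instead be started inside a batch job on Hawk and the ParaView client connected to it. The following is a sketch; the process count, port and tunnelling details are placeholders that depend on your allocation and local setup.
<pre>
# sketch: on Hawk, inside an interactive batch job
module load paraview/server/<version>
mpirun -np 16 pvserver --server-port=11111
# then connect a matching ParaView client (e.g. on Vulcan) to
# <hawk-node>:11111, typically through an SSH tunnel
</pre>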

Revision as of 22:45, 26 May 2021
