- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -
ISV Usage
This wiki page guides you on how to use ISV codes on our systems.
HAWK
Please check beforehand via Application_software_packages whether you are allowed to use the installed ISV packages.
ANSYS
Please check via the module environment (Module_environment(Hawk)) which versions are currently available. Typical batch submission script examples are given below.
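If you are unsure which versions are installed, the module environment can be queried directly. A minimal sketch (the version number is simply the one used in the CFX example below and may differ on your system):
<pre>
# list the ANSYS versions currently provided by the module environment
module avail ansys

# load a specific version (19.5 is the one used in the CFX example below)
module load ansys/19.5
</pre>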
CFX
On HAWK we currently provide two methods to run CFX in parallel:
-start-method "HMPT MPI Distributed Parallel" utilises HMPT. For more details please see MPI(Hawk).
-start-method "Open MPI HAWK Distributed Parallel" uses a recent OpenMPI version.
A typical batch script could look like this:
<pre>
#!/bin/bash
### asking for 2 nodes for 20 minutes
#PBS -l select=2:mpiprocs=128
#PBS -l walltime=00:20:00
#PBS -N nameOfJob
#PBS -k eod

### load module
module load ansys/19.5

### change to the current working directory
cd $PBS_O_WORKDIR

### save machine list in a file:
cat $PBS_NODEFILE > machines
cpu_id=`cat machines`
cpu_id=`echo $cpu_id | tr -t [:blank:] [,]`
### now we have a comma-separated machine list

### number of processes
num_cpu=`cat machines | wc -l`

### starting CFX with HMPT:
cfx5solve -batch -def mydef.def -ccl myccl.ccl -parallel -par-dist $cpu_id -part ${num_cpu} -solver-double -start-method "HMPT MPI Distributed Parallel" > log 2>&1
</pre>
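Such a script is submitted to the batch system with qsub; the same procedure applies to the FLUENT and STAR-CCM+ examples further down. The file name jobscript.pbs is only a placeholder:
<pre>
# submit the job script to PBS (jobscript.pbs is a placeholder file name)
qsub jobscript.pbs

# check the status of your jobs
qstat -u $USER
</pre>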
FLUENT
For FLUENT we support a recent OpenMPI installation, which can be activated via -mpi=openmpi. A typical batch script could look like this:
<pre>
#!/bin/bash
### asking for 2 nodes for 20 minutes
#PBS -l select=2:mpiprocs=128
#PBS -l walltime=00:20:00
#PBS -N nameOfJob
#PBS -k eod

### load module
module load ansys/21.1

### change to the current working directory
cd $PBS_O_WORKDIR

### save machine list in a file:
cat $PBS_NODEFILE > machines_orig
cat machines_orig | cut -d "." -f1 > machines
MPIPROCSPERNODE=128
awk -v NUM=$MPIPROCSPERNODE 'NR % NUM == 0' machines > machineslots

### number of processes
NUMPROCS=`cat machines | wc -l`

### path to the input (journal) file
TXT=$PBS_O_WORKDIR/fluent-input.txt

### starting FLUENT with OpenMPI:
fluent 3ddp -mpi=openmpi -t${NUMPROCS} -cnf=machineslots -g -i $TXT > log 2>&1
</pre>
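The -i option above points to a journal file (fluent-input.txt). As an illustration only, such a journal could be generated like the sketch below; the case/data file names are placeholders and the TUI commands should be checked against the documentation of your Fluent version:
<pre>
# write a minimal, hypothetical journal file for the batch run above
cat > fluent-input.txt << 'EOF'
; read the case file (placeholder name)
/file/read-case mycase.cas
; run 100 iterations
/solve/iterate 100
; write the resulting data file
/file/write-data mydata.dat
; leave Fluent without asking for confirmation
/exit yes
EOF
</pre>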
SIEMENS
Please check via the module environment (Module_environment(Hawk)) which versions of STAR-CCM+ are currently available. A typical batch submission script example is given below.
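As with ANSYS, the installed STAR-CCM+ versions can be listed through the module environment; the module names below are the ones used in the batch example that follows:
<pre>
# list the STAR-CCM+ related modules
module avail siemens

# license module (academic usage only) and solver module, as in the example below
module load siemens/lic
module load siemens/starccm/14.06.012-r8
</pre>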
STAR-CCM+
On HAWK we currently provide two methods to run STAR-CCM+ in parallel:
-mpidriver hpe utilises MPT. For more details please see MPI(Hawk).
-mpidriver openmpi uses a recent OpenMPI version.
A typical batch script could look like this:
<pre>
#!/bin/bash
################ start test
#PBS -l select=2:mpiprocs=128
#PBS -l walltime=01:00:00
#PBS -N nameofjob
#PBS -k oed

# module for academic usage only
module load siemens/lic
module load siemens/starccm/14.06.012-r8

cd $PBS_O_WORKDIR

# number of MPI processes = number of entries in the node file
np=$[ `wc -l <${PBS_NODEFILE}` ]

# start STAR-CCM+ in batch mode with the MPT MPI driver
starccm+ -batch run -np $np -mpidriver hpe -batchsystem pbs mysimfile@meshed.sim >&outputrun.txt
# or ... -mpidriver openmpi ...
</pre>
VULCAN
Please check beforehand via Application_software_packages whether you are allowed to use the installed ISV packages.
ANSYS
Please check via the module environment (Module_environment(Hawk)) which versions are currently available. Typical batch submission script examples are given below.
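On VULCAN the ANSYS installations are provided under the cae/ prefix (as in the batch example below); a short sketch for listing and loading them:
<pre>
# list the ANSYS versions installed on VULCAN
module avail cae/ansys

# load the version used in the example below
module load cae/ansys/19.3
</pre>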
CFX
On VULCAN you can choose between the following parallel run options:
-start-method "Intel MPI Distributed Parallel"
-start-method "Open MPI Distributed Parallel" uses the OpenMPI version shipped with the ANSYS installation.
A typical batch script could look like this:
<pre>
#!/bin/bash
### skylake example using 4 nodes, each with 40 cores and 192 GB of RAM:
#PBS -l select=4:node_type=skl:node_type_mem=192gb:node_type_core=40c:mpiprocs=40,walltime=01:00:00
#PBS -N myjobname
#PBS -k eod

# load module
module load cae/ansys/19.3

# change into the specific working directory
cd $PBS_O_WORKDIR

# save the machine list and convert it into a comma-separated list
cat $PBS_NODEFILE > machines
cpu_id=`cat machines`
cpu_id=`echo $cpu_id | tr -t [:blank:] [,]`
# now we have a comma-separated machine list, e.g. (n092902,n092902,n092902,...)

# starting cfx5solve:
cfx5solve -def my.def -batch -par -par-dist $cpu_id -start-method "Intel MPI Distributed Parallel"
</pre>
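The two lines that build cpu_id above simply turn the PBS node file into a comma-separated host list. For readers who prefer a single command, an equivalent formulation (not taken from the wiki, just a suggestion) would be:
<pre>
# build the comma-separated host list for -par-dist in a single step
cpu_id=$(paste -sd, $PBS_NODEFILE)
</pre>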