- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -
ISV Usage
This wiki page explains how to use ISV codes on our systems.
HAWK
Please check beforehand via Application_software_packages whether you are allowed to use the installed ISV packages.
ANSYS
Please check via the module environment (Module_environment(Hawk)) which versions are currently available. Typical batch submission script examples are given below.
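For example, on a login node you can list the installed ANSYS modules (a minimal sketch using the standard module command):
<pre>
module avail ansys
</pre>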
CFX
On HAWK we currently provide two methods to run CFX in parallel:

-start-method "HMPT MPI Distributed Parallel" utilises HMPT; for more details please see MPI(Hawk).
-start-method "Open MPI HAWK Distributed Parallel" uses a recent OpenMPI version.
A typical batch script could look like this:
<pre>
#!/bin/bash

###asking for 2 nodes for 20 minutes
#PBS -l select=2:mpiprocs=128
#PBS -l walltime=00:20:00
#PBS -N nameOfJob
#PBS -k eod

###load module
module load ansys/19.5

###change to the current working directory
cd $PBS_O_WORKDIR

###save machine list in file:
cat $PBS_NODEFILE > machines
cpu_id=`cat machines`
cpu_id=`echo $cpu_id | tr -t [:blank:] [,]`
###now we have a comma-separated machine list

###num processes
num_cpu=`cat machines | wc -l`

###starting cfx with HMPT:
cfx5solve -batch -def mydef.def -ccl myccl.ccl -parallel -par-dist $cpu_id -part ${num_cpu} -solver-double -start-method "HMPT MPI Distributed Parallel" > log 2>&1
</pre>
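To run with OpenMPI instead, only the start-method string in the solver line changes; a sketch, assuming the method name listed above:
<pre>
###starting cfx with OpenMPI:
cfx5solve -batch -def mydef.def -ccl myccl.ccl -parallel -par-dist $cpu_id -part ${num_cpu} -solver-double -start-method "Open MPI HAWK Distributed Parallel" > log 2>&1
</pre>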
FLUENT
For FLUENT we support a recent OpenMPI installation, which can be activated via <code>-mpi=openmpi</code>. A typical batch script could look like this:
<pre>
#!/bin/bash

###asking for 2 nodes for 20 minutes
#PBS -l select=2:mpiprocs=128
#PBS -l walltime=00:20:00
#PBS -N nameOfJob
#PBS -k eod

###load module
module load ansys/21.1

###change to the current working directory
cd $PBS_O_WORKDIR

###save machine list in file:
cat $PBS_NODEFILE > machines_orig
cat machines_orig | cut -d "." -f1 > machines
MPIPROCSPERNODE=128
###keep one entry per node:
awk -v NUM=$MPIPROCSPERNODE 'NR % NUM == 0' machines > machineslots

###num processes
NUMPROCS=`cat machines | wc -l`

###path to definition file
TXT=$PBS_O_WORKDIR/fluent-input.txt

###starting fluent with OpenMPI:
fluent 3ddp -mpi=openmpi -t${NUMPROCS} -cnf=machineslots -pib.ofed -g -i $TXT > log 2>&1
</pre>
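The awk filter keeps every 128th line of <code>machines</code>, so <code>machineslots</code> ends up with exactly one entry per node. A toy illustration with 2 slots per node and hypothetical hostnames:
<pre>
printf "node1\nnode1\nnode2\nnode2\n" > machines
awk -v NUM=2 'NR % NUM == 0' machines
# prints:
# node1
# node2
</pre>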
ANSYS Mechanical
<pre>
#!/bin/bash
# similar to "qgen -n 1 ansys_mapdl -i wing.dat -o wing_output"
#PBS -l select=1
#
#PBS -l walltime=00:05:00
#PBS -N ansys_mapdl
#PBS -k eod

echo "[`date +%Y-%m-%dT%H:%M:%S`] starting Job ${PBS_JOBID%%.*}"

module load cae
module load ansys/23.2
module load ansys/lic/academic   #lic only for academic usage!

cd ${PBS_O_WORKDIR}
cp ${AWP_ROOT}/ansys/site/ansys/mapdl/core/examples/wing.dat .   # copy some example input

mapdl -b -i wing.dat -o wing_output

echo "[`date +%Y-%m-%dT%H:%M:%S`] finishing Job ${PBS_JOBID%%.*}"
</pre>
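The script can then be submitted like any other PBS job (hypothetical file name):
<pre>
qsub mapdl-job.pbs
</pre>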
Also see the qgen usage examples.
LS-Dyna
(This section moved here since LSTC was acquired by ANSYS.)
See the qgen usage examples.
SIEMENS
Please check via the module environment (Module_environment(Hawk)) which versions of STAR-CCM+ are currently available. A typical batch submission script example is given below.
STAR-CCM+
On HAWK we currently provide two methods to run STAR-CCM+ in parallel:
-mpidriver hpe utilises MPT; for more details please see MPI(Hawk).
-mpidriver openmpi uses a recent OpenMPI version.
A typical batch script could look like this:
<pre>
#!/bin/bash

################start test
#PBS -l select=2:mpiprocs=128
#PBS -l walltime=01:00:00
#PBS -N nameofjob
#PBS -k oed

#module for academic usage only
module load siemens/lic/academic
module load siemens/starccm/14.06.012-r8

cd $PBS_O_WORKDIR

np=$[ `wc -l <${PBS_NODEFILE}` ]

starccm+ -batch run -np ${np} -noconnect -mpidriver hpe -batchsystem pbs mysimfile@meshed.sim >&outputrun.txt
# or ... -mpidriver openmpi ...
</pre>
STAR-CCM+ - Connecting local client with server
To connect a local client to the STAR-CCM+ server on the compute node, proceed as follows. The job submission script has to be changed to use the BPORT option and a port range, e.g.:
<pre>
#!/bin/bash

################start test with BPORT true
#PBS -l select=2:mpiprocs=128:BPORT=true
#PBS -l walltime=01:00:00
#PBS -N nameofjob
#PBS -k oed

#module for academic usage only
module load siemens/lic/academic
module load siemens/starccm/14.06.012-r8

cd $PBS_O_WORKDIR

np=$[ `wc -l <${PBS_NODEFILE}` ]

###save machine list in file:
cat $PBS_NODEFILE > machines

#starccm+ job with port definition:
starccm+ -batch run -np ${np} -portrange 47827-47827 -mpidriver hpe -batchsystem pbs mysimfile@meshed.sim >&outputrun.txt
# or ... -mpidriver openmpi ...
</pre>
Once the job is running, you can establish an SSH tunnel to the master node, which is the first node in $PBS_NODEFILE (saved to the file <code>machines</code> by the job script). To do so, open a terminal on your local machine and run:
<pre>
ssh -J <yourusernameathlrs>@hawk.hww.hlrs.de -o HostKeyAlias=hawk.hww.hlrs.de -N -L localhost:10001:127.0.0.1:47827 <yourusernameathlrs>@<firstnode>
</pre>
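To determine <code><firstnode></code>, you can print the first entry of the <code>machines</code> file written by the job script (a sketch; a domain suffix, if present, must be stripped):
<pre>
head -1 machines | cut -d "." -f1
</pre>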
where <code><firstnode></code> is the master node (e.g. r38c4t5n2) and <code><yourusernameathlrs></code> is your username at HLRS. The local STAR-CCM+ client can then connect to the server via port 10001 on localhost.
VULCAN
Please check beforehand via Application_software_packages whether you are allowed to use the installed ISV packages.
ANSYS
Please check via the module environment (<code>module avail</code>) which versions are currently available. Typical batch submission script examples are given below.
CFX
On VULCAN you can choose between the following parallel run methods:
-start-method "Intel MPI Distributed Parallel"
-start-method "Open MPI Distributed Parallel"
uses an OpenMPI version brought by ANSYS installation folder.
A typical batch script could look like this:
<pre>
#!/bin/bash

### skylake example using 4 nodes with each: 40 cores and 192gb of ram:
#PBS -l select=4:node_type=skl:node_type_mem=192gb:node_type_core=40c:mpiprocs=40,walltime=01:00:00
#PBS -N myjobname
#PBS -k eod

# load module
module load ansys/22.1
#lic only for academic usage:
module load ansys/lic/academic

# change into the specific working directory
cd $PBS_O_WORKDIR

cat $PBS_NODEFILE > machines
cpu_id=`cat machines`
cpu_id=`echo $cpu_id | tr -t [:blank:] [,]`
#now we have a comma-separated machine list, e.g. (n092902,n092902,n092902,...)

# starting cfx5solve:
cfx5solve -def my.def -batch -par -par-dist $cpu_id -start-method "Intel MPI Distributed Parallel"
</pre>
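To run with the OpenMPI shipped in the ANSYS installation instead, only the start-method string changes; a sketch:
<pre>
cfx5solve -def my.def -batch -par -par-dist $cpu_id -start-method "Open MPI Distributed Parallel"
</pre>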
FLUENT
For FLUENT one can use the MPI auto-select option, which can be activated via <code>-pmpi-auto-selected</code>. A typical batch script could look like this:
<pre>
#!/bin/bash

# skylake, 2 nodes with each: 192gb ram and 40 cores, allowing 40 mpiprocs per node
#PBS -l select=2:node_type=skl:node_type_mem=192gb:node_type_core=40c:mpiprocs=40,walltime=00:10:00
#PBS -N myjobname
#PBS -k eod

# load module
module load ansys/22.1
#lic only for academic usage:
module load ansys/lic/academic

# change into the specific working directory
cd $PBS_O_WORKDIR

# my def file
DEF=$PBS_O_WORKDIR/myfile.in

# my machines
cat $PBS_NODEFILE > machines
np=$[ `wc -l <${PBS_NODEFILE}` ]

fluent 3ddp -pmpi-auto-selected -t${np} -pinfiniband -cnf=machines -g < $DEF > mylog
</pre>
ANSYS Mechanical
<pre>
#!/bin/bash
# similar to "qgen -n 1:nodetype=genoa ansys_mapdl -i wing.dat -o wing_output" (uncomment cp)
#PBS -l select=1:nodetype=genoa
#
#PBS -l walltime=00:05:00
#PBS -N ansys_mapdl
#PBS -k eod

echo "[`date +%Y-%m-%dT%H:%M:%S`] starting Job ${PBS_JOBID%%.*}"

module load cae
module load ansys/23.2
module load ansys/lic/academic   #lic only for academic usage!

cd ${PBS_O_WORKDIR}
cp ${AWP_ROOT}/ansys/site/ansys/mapdl/core/examples/wing.dat .   # copy input data

mapdl -b -i wing.dat -o wing_output

echo "[`date +%Y-%m-%dT%H:%M:%S`] finishing Job ${PBS_JOBID%%.*}"
</pre>
Also see the qgen usage examples.
LS-Dyna
(This section moved here since LSTC was acquired by ANSYS.)
See the qgen usage examples.
SIEMENS
Please check via the module environment (<code>module avail</code>) which versions of STAR-CCM+ are currently available. A typical batch submission script example is given below.
STAR-CCM+
On VULCAN we provide the standard methods to run STAR-CCM+ in parallel. Depending on the loaded STAR-CCM+ version, one can choose between three MPI drivers:
-mpi platform
-mpi intel
-mpi openmpi
A typical batch script could look like this:
<pre>
#!/bin/bash

### 2 haswell nodes with each: 128gb ram and 24 cores allowing 24 mpiprocs per node
#PBS -l select=2:node_type=hsw:node_type_mem=128gb:node_type_core=24c:mpiprocs=24,walltime=00:25:00
#PBS -N myjobname
#PBS -k eod

#lic only for academic usage:
module load siemens/lic/academic
# load STAR-CCM+ module
module load siemens/starccm/17.04.007-r8

# change into the specific working directory
cd $PBS_O_WORKDIR

# get num procs
np=$[ `wc -l <${PBS_NODEFILE}` ]

starccm+ -batch macro.java -np ${np} -noconnect -mpidriver platform -batchsystem pbs mysimfile.sim > mylog 2>&1
### or skip option -mpidriver platform, then starccm+ chooses the default setting
</pre>
STAR-CCM+ - Connecting local client with server
To connect a local client to the STAR-CCM+ server on the compute node, proceed as follows. The job submission script has to be changed to use the BPORT option and a port range, e.g.:
<pre>
#!/bin/bash

### 2 haswell nodes with each: 128gb ram and 24 cores allowing 24 mpiprocs per node, with BPORT=true
#PBS -l select=2:node_type=hsw:node_type_mem=128gb:node_type_core=24c:mpiprocs=24:BPORT=true,walltime=00:25:00
#PBS -N myjobname
#PBS -k eod

#lic only for academic usage:
module load siemens/lic/academic
# load STAR-CCM+ module
module load siemens/starccm/17.04.007-r8

# change into the specific working directory
cd $PBS_O_WORKDIR

###save machine list in file:
cat $PBS_NODEFILE > machines

# get num procs
np=$[ `wc -l <${PBS_NODEFILE}` ]

starccm+ -batch macro.java -np ${np} -portrange 47827-47827 -mpidriver platform -batchsystem pbs mysimfile.sim > mylog 2>&1
### or skip option -mpidriver platform, then starccm+ chooses the default setting
</pre>
Once the job is running, you can establish an SSH tunnel to the master node, which is the first node in $PBS_NODEFILE (saved to the file <code>machines</code> by the job script). To do so, open a terminal on your local machine and run:
<pre>
ssh -J <yourusernameathlrs>@vulcan.hww.hlrs.de -o HostKeyAlias=vulcan.hww.hlrs.de -N -L localhost:10001:127.0.0.1:47827 <yourusernameathlrs>@<firstnode>
</pre>
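As on HAWK, the first entry of the <code>machines</code> file gives <code><firstnode></code>, e.g.:
<pre>
head -1 machines
</pre>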
where <code><firstnode></code> is the master node (e.g. n122202) and <code><yourusernameathlrs></code> is your username at HLRS. The local STAR-CCM+ client can then connect to the server via port 10001 on localhost.