- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

ISV Usage

This wiki page explains how to use ISV (independent software vendor) codes on our systems.

HAWK

Please check beforehand via Application_software_packages whether you are allowed to use the installed ISV packages.

ANSYS

Please check via the module environment (Module_environment(Hawk)) which versions are currently available. Typical batch submission script examples are given below.

CFX

On HAWK we currently provide two methods to run CFX in parallel:

  1. -start-method "HMPT MPI Distributed Parallel" uses HMPT. For more details, please see MPI(Hawk).
  2. -start-method "Open MPIHAWK Distributed Parallel" uses a recent Open MPI version (a command-line variant is shown after the script below).

A typical batch script could look like this:

#!/bin/bash
###asking for 2 nodes for 20 minutes
#PBS -l select=2:mpiprocs=128
#PBS -l walltime=00:20:00
#PBS -N nameOfJob
#PBS -k eod

###load module
module load ansys/19.5

###change to the current working directory
cd $PBS_O_WORKDIR

###save machine list in file:
cat $PBS_NODEFILE > machines

cpu_id=`cat machines`
cpu_id=`echo $cpu_id | tr -t [:blank:] [,]`
###now we have a comma-separated machine list

###num processes
num_cpu=`cat machines | wc -l`

###starting cfx with HMPT:
cfx5solve -batch -def mydef.def -ccl myccl.ccl -parallel -par-dist $cpu_id -part ${num_cpu} -solver-double -start-method "HMPT MPI Distributed Parallel" > log 2>&1
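
If you prefer the Open MPIHAWK start method listed above, only the -start-method argument changes. A sketch reusing the same variables as in the script above:

cfx5solve -batch -def mydef.def -ccl myccl.ccl -parallel -par-dist $cpu_id -part ${num_cpu} -solver-double -start-method "Open MPIHAWK Distributed Parallel" > log 2>&1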

FLUENT

For FLUENT we support a recent Open MPI installation, which can be activated via -mpi=openmpi. A typical batch script could look like this:

#!/bin/bash
###asking for 2 nodes for 20 minutes
#PBS -l select=2:mpiprocs=128
#PBS -l walltime=00:20:00
#PBS -N nameOfJob
#PBS -k eod

###load module
module load ansys/21.1

###change to the current working directory
cd $PBS_O_WORKDIR

###save machine list in file:
cat $PBS_NODEFILE > machines_orig
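### strip the domain part of each hostname, keeping only the short node names: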
cat machines_orig | cut -d "." -f1 > machines

MPIPROCSPERNODE=128
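### keep every MPIPROCSPERNODE-th line so that machineslots lists each node exactly once: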
awk -v NUM=$MPIPROCSPERNODE 'NR % NUM == 0' machines > machineslots


###num processes
NUMPROCS=`cat machines | wc -l`

###path to definition file
TXT=$PBS_O_WORKDIR/fluent-input.txt

###starting fluent with OpenMPI:
fluent 3ddp -mpi=openmpi -t${NUMPROCS} -cnf=machineslots -g -i $TXT > log 2>&1
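
The -i option points Fluent to a journal file containing TUI commands. The snippet below is a minimal, hypothetical sketch of how fluent-input.txt could be created; the case file name, the iteration count, and the output file name are placeholders, and the exact journal commands may differ between Fluent versions:

### hypothetical journal file; adjust names and the iteration count for your case
cat > fluent-input.txt << 'EOF'
/file/read-case-data mycase.cas
/solve/iterate 100
/file/write-case-data myresult.cas
/exit
yes
EOF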

SIEMENS

Please check via the module environment (Module_environment(Hawk)) which versions are currently available for STAR-CCM+. A typical batch submission script example is given below.

STAR-CCM+

On HAWK we currently provide two methods to run STAR-CCM+ in parallel:

  1. -mpidriver hpe uses MPT. For more details, please see MPI(Hawk).
  2. -mpidriver openmpi uses a recent Open MPI version (written out after the script below).

A typical batch script could look like this:

#!/bin/bash
################start test
#PBS -l select=2:mpiprocs=128
#PBS -l walltime=01:00:00
#PBS -N nameofjob
#PBS -k oed

#module for academic usage only
module load siemens/lic
module load siemens/starccm/14.06.012-r8

cd $PBS_O_WORKDIR

np=$(wc -l < ${PBS_NODEFILE})

starccm+ -batch run -np ${np} -noconnect -mpidriver hpe -batchsystem pbs mysimfile@meshed.sim >&outputrun.txt
# or ... -mpidriver openmpi ...
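
Written out, the Open MPI variant from the comment above only swaps the MPI driver; a sketch reusing the same variables:

starccm+ -batch run -np ${np} -noconnect -mpidriver openmpi -batchsystem pbs mysimfile@meshed.sim >& outputrun.txt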

VULCAN

Please check beforehand via Application_software_packages whether you are allowed to use the installed ISV packages.

ANSYS

Please check via the module environment (module avail) which versions are currently available. Typical batch submission script examples are given below.

CFX

On VULCAN you can choose between the following parallel run methods:

  1. -start-method "Intel MPI Distributed Parallel" uses Intel MPI.
  2. -start-method "Open MPI Distributed Parallel" uses the Open MPI version shipped with the ANSYS installation (a command-line variant is shown after the script below).

A typical batch script could look like this:

#!/bin/bash
### skylake example using 4 nodes, each with 40 cores and 192 GB of RAM:
#PBS -l select=4:node_type=skl:node_type_mem=192gb:node_type_core=40c:mpiprocs=40,walltime=01:00:00
#PBS -N myjobname
#PBS -k eod

# load module
module load cae/ansys/19.3

# change into the specific working directory
cd $PBS_O_WORKDIR

cat $PBS_NODEFILE > machines
cpu_id=`cat machines`
cpu_id=`echo $cpu_id | tr -t [:blank:] [,]`
# now we have a comma-separated machine list, e.g. (n092902,n092902,n092902,...)

# starting cfx5solve:
cfx5solve -def my.def -batch -par -par-dist $cpu_id -start-method "Intel MPI Distributed Parallel"
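
If you prefer the Open MPI start method from the list above, only the -start-method argument changes; a sketch based on the same variables:

cfx5solve -def my.def -batch -par -par-dist $cpu_id -start-method "Open MPI Distributed Parallel"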

FLUENT

For FLUENT one can use the MPI auto-select option, which is activated via -pmpi-auto-selected. A typical batch script could look like this:

#!/bin/bash
# skylake, 2 nodes, each with 192 GB RAM and 40 cores, allowing 40 mpiprocs per node
#PBS -l select=2:node_type=skl:node_type_mem=192gb:node_type_core=40c:mpiprocs=40,walltime=00:10:00
#PBS -N myjobname
#PBS -k eod

#module
module load cae/ansys/21.1

# change into the specific working directory 
cd $PBS_O_WORKDIR

# my def file
DEF=$PBS_O_WORKDIR/myfile.in

# my machines
cat $PBS_NODEFILE > machines
np=$(wc -l < ${PBS_NODEFILE})

fluent 3ddp -pmpi-auto-selected -t${np} -pinfiniband -cnf=machines -g < $DEF > mylog 

SIEMENS

Please check via the module environment (module avail) which versions are currently available for STAR-CCM+. A typical batch submission script example is given below.

STAR-CCM+

On VULCAN we provide the standard methods to run STAR-CCM+ in parallel. One can choose between three methods, depending on the loaded STAR-CCM+ version (an Open MPI variant of the command is shown after the script below):

  1. -mpi platform
  2. -mpi intel
  3. -mpi openmpi

A typical batch script could look like this:

#!/bin/bash
### 2 haswell nodes, each with 128 GB RAM and 24 cores, allowing 24 mpiprocs per node
#PBS -l select=2:node_type=hsw:node_type_mem=128gb:node_type_core=24c:mpiprocs=24,walltime=00:25:00
#PBS -N myjobname
#PBS -k eod

#lic only for academic usage:
module load cae/cdadapco/lic
#module
module load cae/cdadapco/starccm/15.02.007-r8

# change into the specific working directory
cd $PBS_O_WORKDIR

# get num procs
np=$(wc -l < ${PBS_NODEFILE})

starccm+ -batch macro.java -np ${np} -noconnect -mpidriver platform -batchsystem pbs mysimfile.sim > mylog 2>&1
### or omit the -mpidriver platform option; starccm+ then chooses the default setting
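
To use one of the other MPI implementations from the list above, only the MPI option changes; a sketch reusing the variables from the script, shown here with Open MPI (whether the option is spelled -mpi or the older -mpidriver used in the script depends on the loaded STAR-CCM+ version):

starccm+ -batch macro.java -np ${np} -noconnect -mpi openmpi -batchsystem pbs mysimfile.sim > mylog 2>&1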