- Information in the HLRS Wiki is not legally binding and is provided without warranty -

ISV Usage

From HLRS Platforms
 

Revision as of 16:10, 10 June 2021

This wiki page describes how to use ISV codes on our systems.

HAWK

Please check beforehand via Application_software_packages whether you are allowed to use the installed ISV packages.

ANSYS

Please check via the module environment (Module_environment(Hawk)) which versions are currently available. Typical batch submission script examples are given below.

CFX

On HAWK we currently provide two methods to run CFX in parallel:

  1. -start-method "HMPT MPI Distributed Parallel" utilises HMPT; for more details please see MPI(Hawk).
  2. -start-method "Open MPIHAWK Distributed Parallel" uses a recent Open MPI version.
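The two methods differ only in the -start-method string passed to the solver. The following lines are an illustrative sketch, not a complete script: mydef.def and myccl.ccl are placeholder file names, and $cpu_id and ${num_cpu} are built as in the batch script below.

```
###starting cfx with HMPT:
cfx5solve -batch -def mydef.def -ccl myccl.ccl -parallel -par-dist $cpu_id -part ${num_cpu} -solver-double -start-method "HMPT MPI Distributed Parallel" > log 2>&1

###starting cfx with Open MPI instead:
cfx5solve -batch -def mydef.def -ccl myccl.ccl -parallel -par-dist $cpu_id -part ${num_cpu} -solver-double -start-method "Open MPIHAWK Distributed Parallel" > log 2>&1
```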

A typical batch script could look like this:

#!/bin/bash
###asking for 2 nodes for 20 minutes
#PBS -l select=2:mpiprocs=128
#PBS -l walltime=00:20:00
#PBS -N nameOfJob
#PBS -k eod

###load module
module load ansys/19.5

###change to current working directory
cd $PBS_O_WORKDIR

###save machine list in file:
cat $PBS_NODEFILE > machines

cpu_id=`cat machines`
cpu_id=`echo $cpu_id | tr ' ' ','`
###now we have a comma-separated machine list

###num processes
num_cpu=`cat machines | wc -l`

###starting cfx with HMPT:
cfx5solve -batch -def mydef.def -ccl myccl.ccl -parallel -par-dist $cpu_id -part ${num_cpu} -solver-double -start-method "HMPT MPI Distributed Parallel" > log 2>&1
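To illustrate how the comma-separated host list is built, here is a self-contained sketch. The hostnames are made up; the unquoted $cpu_id in the echo lets the shell collapse the newlines to single spaces, which tr then turns into commas.

```shell
# fake machines file with one line per MPI rank (hypothetical hostnames)
printf 'node1\nnode1\nnode2\nnode2\n' > machines

# read the file and turn the whitespace-separated list into a comma-separated one
cpu_id=`cat machines`
cpu_id=`echo $cpu_id | tr ' ' ','`

# number of parallel processes = number of lines in the machines file
num_cpu=`cat machines | wc -l`

echo "$cpu_id"
```

Running this prints node1,node1,node2,node2, i.e. one entry per MPI rank, which is the format -par-dist expects.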

FLUENT

For FLUENT we support a recent Open MPI installation, which can be activated via -mpi=openmpi. A typical batch script could look like this:

#!/bin/bash
###asking for 2 nodes for 20 minutes
#PBS -l select=2:mpiprocs=128
#PBS -l walltime=00:20:00
#PBS -N nameOfJob
#PBS -k eod

###load module
module load ansys/21.1

###change to current working directory
cd $PBS_O_WORKDIR

###save machine list in file:
cat $PBS_NODEFILE > machines_orig
cat machines_orig | cut -d "." -f1 > machines

###keep every MPIPROCSPERNODE-th line, i.e. one entry per node
MPIPROCSPERNODE=128
awk -v NUM=$MPIPROCSPERNODE 'NR % NUM == 0' machines > machineslots


###num processes
NUMPROCS=`cat machines | wc -l`

###path to input (journal) file
TXT=$PBS_O_WORKDIR/fluent-input.txt

###starting fluent with OpenMPI:
fluent 3ddp -mpi=openmpi -t${NUMPROCS} -cnf=machineslots -g -i $TXT > log 2>&1
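The file passed via -i is a Fluent journal file containing text-interface commands. A minimal hypothetical example is sketched below; the case file names and the iteration count are placeholders, and the exact commands should be adapted to your Fluent version and setup.

```
; fluent-input.txt -- hypothetical journal file
/file/read-case-data mycase.cas.h5
/solve/iterate 100
/file/write-case-data myresult.cas.h5
/exit yes
```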
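The machine-file handling in the script above can be tried out in isolation. This self-contained sketch uses hypothetical hostnames and a toy value of MPIPROCSPERNODE=4 (2 nodes with 4 ranks each):

```shell
# fake $PBS_NODEFILE: 2 nodes x 4 MPI ranks, hypothetical hostnames
for i in 1 2 3 4; do echo node1.hawk.example.com; done >  machines_orig
for i in 1 2 3 4; do echo node2.hawk.example.com; done >> machines_orig

# strip the domain part, keeping only the short hostname
cat machines_orig | cut -d "." -f1 > machines

# keep every NUM-th line, i.e. one entry per node
MPIPROCSPERNODE=4
awk -v NUM=$MPIPROCSPERNODE 'NR % NUM == 0' machines > machineslots

cat machineslots
```

The final cat prints node1 and node2, one line per node, while the machines file still holds one line per MPI rank for the -t${NUMPROCS} count.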