- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -

Libraries (Hunter)
On Hunter it is strongly recommended to use optimized libraries whenever possible.
<span id="io-libraries"></span>
== I/O Libraries ==
 
For parallel I/O, three libraries are provided along with the Cray Programming Environment:
 
* Cray HDF5
* Cray NetCDF
* Cray Parallel NetCDF
 
<span id="netcdf"></span>
=== NetCDF ===
 
On Hunter, the [https://cpe.ext.hpe.com/docs/latest/csml/index.html#cray-netcdf Cray NetCDF] version of the [https://www.unidata.ucar.edu/software/netcdf/ Network Common Data Form (NetCDF)] library is installed. The latest release notes can be found at https://cpe.ext.hpe.com/docs/latest/csml/cray_parallel_netcdf.html.
 
 
-----
 
To use the library, load the following modules:
 
<syntaxhighlight lang="bash">module load cray-hdf5-parallel/1.14.3.1
module load cray-netcdf-hdf5parallel/4.9.0.13</syntaxhighlight>
Make sure the include and library paths are provided to the compiler and linker. Within the Cray Programming Environment this can be done by providing the <code>-I</code>, <code>-L</code>, and <code>-l</code> options on the command line.
 
<pre>&lt;compiler_wrapper&gt; -L${NETCDF_DIR}/lib/ -I${NETCDF_DIR}/include -lnetcdf -lnetcdff &lt;your_sources&gt;</pre>
<span id="fortran-example"></span>
==== Fortran Example ====
 
Here we provide a very simple example of using NetCDF from Fortran. The program writes a small 2D field to a single file in parallel from four MPI ranks.
 
<pre class="title="netcdf_io.f90"">Program netcdf_io
 
  ! Use mpi and netcdf --------------------------------------------------------
  Use mpi
  Use netcdf
 
  Implicit None
 
  ! Init kind parameters ------------------------------------------------------
  Integer, Parameter                :: ik=4, rk=8
  ! Some data -----------------------------------------------------------------
  Real(kind=rk), Target            :: Data_out(4, 8)
  ! Variables for netcdf ------------------------------------------------------
  Integer(kind=ik)                  :: ncid
  Integer(kind=ik)                  :: xDimID
  Integer(kind=ik)                  :: yDimID
  Integer(kind=ik)                  :: varID
  Integer(kind=ik)                  :: DimIDs(2)
  Integer(kind=ik)                  :: ncError
  Integer(kind=ik)                  :: Start(2)
  ! Variables for mpi ---------------------------------------------------------
  Integer(kind=ik)                  :: mpiError,rank


== Numerical Libraries ==
  ! MPI init ------------------------------------------------------------------
  Call mpi_init(mpiError)
  Call mpi_comm_rank(MPI_COMM_WORLD,rank,mpiError)


=== LibSci_ACC ===
  ! Tell who's there and init data --------------------------------------------
...
  Write(*,*)&quot;Rank&quot;,rank,&quot;netcdf_io ... &quot;
 
  data_out = Real(rank)*0.5_rk


=== FFTW ===
  ! Create a netcdf file ------------------------------------------------------
...
  ncError = nf90_create( &quot;netcdf_io.nc&quot;, &amp;
      Ior(NF90_NETCDF4, NF90_MPIIO), &amp;
      ncid, comm=MPI_COMM_WORLD, info=MPI_INFO_NULL)


=== Iterative Refinement Toolkit (IRT) ===
  ! Define global dimensions --------------------------------------------------
...
  ncError = nf90_def_dim(ncid, &quot;x&quot;, Int(16,4), xDimID)
  ncError = nf90_def_dim(ncid, &quot;y&quot;, Int( 8,4), yDimID)


== I/O Libraries ==
  DimIDs = (/xDimID, yDimID/)
 
  ! Define a netcdf variable --------------------------------------------------
  ncError = nf90_def_var(ncid, &quot;data&quot;, NF90_DOUBLE, DimIDs, varID, &amp;
      chunksizes=(/4_4,8_4/) )
 
  ! Finalize netcdf definitions -----------------------------------------------
  ncError = nf90_enddef(ncid)
 
  ! Set local offset per rank -------------------------------------------------
  Start = (/ rank*4_4 , 0_4 /) + 1_4
 
  ! Write data ----------------------------------------------------------------
  ncError = nf90_put_var(ncid, varID, Data_out, start=Start, count=(/4_4,8_4/))
 
  ! Close file ----------------------------------------------------------------
  ncError = nf90_close(ncid)
 
  ! Goodbye and finalize ------------------------------------------------------
  Write(*,*)&quot;Rank&quot;,rank,&quot;done.&quot;
 
  Call mpi_finalize(mpiError)
 
End Program netcdf_io</pre>
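For brevity, the example above does not evaluate the return codes collected in <code>ncError</code>. In real applications every NetCDF call should be checked. A minimal sketch of such a check, using the library's <code>nf90_strerror</code> function, could look as follows (the helper name <code>check_nc</code> is our own choice, not part of the NetCDF API):

<pre>Subroutine check_nc(status)

  Use netcdf

  Implicit None

  Integer, Intent(in) :: status

  ! Abort with a readable message if a netcdf call did not succeed ------------
  If (status /= NF90_NOERR) Then
     Write(*,*) Trim(nf90_strerror(status))
     Stop 1
  End If

End Subroutine check_nc</pre>
With such a helper, the calls in the example can be wrapped, e.g. <code>Call check_nc( nf90_enddef(ncid) )</code>.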
To build an executable from the above example, use the programming environment of your choice; in our example we stick to the default one.
 
<pre>#!/bin/bash
ftn -L${NETCDF_DIR}/lib/ -I${NETCDF_DIR}/include -lnetcdf -lnetcdff netcdf_io.f90 -o netcdf_io</pre>
The demo can be executed by submitting the following PBS batch script.
 
<pre class="title="netcdf_test.pbs"">#!/bin/bash
#PBS -N Test_NetCDF
#PBS -l select=1:node_type=mi300a:mpiprocs=4
#PBS -l walltime=00:01:00
#PBS -q test
 
cd $PBS_O_WORKDIR
 
module load cray-hdf5-parallel/1.14.3.1
module load cray-netcdf-hdf5parallel/4.9.0.13
 
mpiexec -n 4 --ppn 4 ./netcdf_io
 
ncdump netcdf_io.nc &gt; netcdf_io.nc.dump</pre>
As can be seen below, the four MPI ranks produce a single NetCDF file, with each rank writing a rectangular block along the x-direction of the contained 2D data field: rank <code>r</code> fills the x-range <code>4*r+1</code> to <code>4*r+4</code>, as set by the <code>Start</code> offset.
 
<pre>netcdf netcdf_io {
dimensions:
    x = 16 ;
    y = 8 ;
variables:
    double data(y, x) ;
data:

 data =
  0, 0, 0, 0, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1.5, 1.5, 1.5, 1.5,
  0, 0, 0, 0, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1.5, 1.5, 1.5, 1.5,
  0, 0, 0, 0, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1.5, 1.5, 1.5, 1.5,
  0, 0, 0, 0, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1.5, 1.5, 1.5, 1.5,
  0, 0, 0, 0, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1.5, 1.5, 1.5, 1.5,
  0, 0, 0, 0, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1.5, 1.5, 1.5, 1.5,
  0, 0, 0, 0, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1.5, 1.5, 1.5, 1.5,
  0, 0, 0, 0, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1.5, 1.5, 1.5, 1.5 ;
}</pre>
The output shown above can be generated from the resulting file <code>netcdf_io.nc</code> with the <code>ncdump</code> tool.


<pre>ncdump netcdf_io.nc</pre>
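The file can also be verified programmatically by opening it for parallel reading from Fortran. The following sketch is not part of the original demo; it assumes the same four ranks and the file produced above, and uses only standard netcdf-fortran calls (<code>nf90_open</code>, <code>nf90_inq_varid</code>, <code>nf90_get_var</code>):

<pre>Program netcdf_read

  ! Read back the field written by netcdf_io (hypothetical check, 4 ranks) ----
  Use mpi
  Use netcdf

  Implicit None

  Integer          :: ncid, varID, rank, mpiError, ncError
  Real(kind=8)     :: Data_in(4, 8)

  Call mpi_init(mpiError)
  Call mpi_comm_rank(MPI_COMM_WORLD, rank, mpiError)

  ! Open the file written above for parallel read access ----------------------
  ncError = nf90_open("netcdf_io.nc", NF90_NOWRITE, ncid, &
       comm=MPI_COMM_WORLD, info=MPI_INFO_NULL)

  ! Look up the variable and read this rank's 4x8 block -----------------------
  ncError = nf90_inq_varid(ncid, "data", varID)
  ncError = nf90_get_var(ncid, varID, Data_in, &
       start=(/ rank*4 + 1, 1 /), count=(/ 4, 8 /))

  Write(*,*) "Rank", rank, "read", Data_in(1,1), "expected", Real(rank)*0.5_8

  ncError = nf90_close(ncid)

  Call mpi_finalize(mpiError)

End Program netcdf_read</pre>
It can be built and launched in the same way as the write example, e.g. with <code>ftn</code> and the PBS script shown above.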
A more sophisticated, configurable example, which also determines the I/O bandwidth, can be found at [[NetCDF%20Extended%20Tests%20and%20Examples|NetCDF Extended Tests and Examples]].
