- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -
Libraries (Hunter)
I/O Libraries
For parallel I/O, three libraries are provided along with the Cray Programming Environment:
- Cray HDF5
- Cray NetCDF
- Cray Parallel NetCDF
NetCDF
On Hunter, the Cray NetCDF version of the Network Common Data Form (NetCDF) library is installed. The latest release notes can be found at https://cpe.ext.hpe.com/docs/latest/csml/cray_parallel_netcdf.html.
To use the library, load the following modules:
module load cray-hdf5-parallel/1.14.3.1
module load cray-netcdf-hdf5parallel/4.9.0.13
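Note that the exact version numbers depend on the installed CPE release; the versions available on the system can be listed, for example, with
module avail cray-netcdf-hdf5parallel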
Make sure the include and library paths are provided to the compiler and linker. Within the Cray Programming Environment this can be done by providing the -I, -L and -l options on the command line.
<compiler_wrapper> -L${NETCDF_DIR}/lib/ -I${NETCDF_DIR}/include -lnetcdf -lnetcdff <your_sources>
Fortran Example
Here we provide a very simple example of using NetCDF in Fortran.
Program netcdf_io

   ! Use mpi and netcdf --------------------------------------------------------
   Use mpi
   Use netcdf

   Implicit None

   ! Init kind parameters ------------------------------------------------------
   Integer, Parameter :: ik=4, rk=8

   ! Some data -----------------------------------------------------------------
   Real(kind=rk), Target :: Data_out(4, 8)

   ! Variables for netcdf ------------------------------------------------------
   Integer(kind=ik) :: ncid
   Integer(kind=ik) :: xDimID
   Integer(kind=ik) :: yDimID
   Integer(kind=ik) :: varID
   Integer(kind=ik) :: DimIDs(2)
   Integer(kind=ik) :: ncError
   Integer(kind=ik) :: Start(2)

   ! Variables for mpi ---------------------------------------------------------
   Integer(kind=ik) :: mpiError, rank

   ! MPI init ------------------------------------------------------------------
   Call mpi_init(mpiError)
   Call mpi_comm_rank(MPI_COMM_WORLD, rank, mpiError)

   ! Tell who's there and init data --------------------------------------------
   Write(*,*)"Rank",rank,"netcdf_io ... "
   data_out = Real(rank)*0.5_rk

   ! Create a netcdf file ------------------------------------------------------
   ncError = nf90_create( "netcdf_io.nc",             &
                          Ior(NF90_NETCDF4, NF90_MPIIO), &
                          ncid, comm=MPI_COMM_WORLD, info=MPI_INFO_NULL)

   ! Define global dimensions --------------------------------------------------
   ncError = nf90_def_dim(ncid, "x", Int(16,4), xDimID)
   ncError = nf90_def_dim(ncid, "y", Int( 8,4), yDimID)

   DimIDs = (/xDimID, yDimID/)

   ! Define a netcdf variable --------------------------------------------------
   ncError = nf90_def_var(ncid, "data", NF90_DOUBLE, DimIDs, varID, &
                          chunksizes=(/4_4,8_4/) )

   ! Finalize netcdf definitions -----------------------------------------------
   ncError = nf90_enddef(ncid)

   ! Set local offset per rank -------------------------------------------------
   Start = (/ rank*4_4 , 0_4 /) + 1_4

   ! Write data ----------------------------------------------------------------
   ncError = nf90_put_var(ncid, varID, Data_out, start=Start, count=(/4_4,8_4/))

   ! Close file ----------------------------------------------------------------
   ncError = nf90_close(ncid)

   ! Goodbye and finalize ------------------------------------------------------
   Write(*,*)"Rank",rank,"done."
   Call mpi_finalize(mpiError)

End Program netcdf_io
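For brevity the example above ignores the NetCDF return codes. In a real application each call should be checked against NF90_NOERR; a minimal sketch of such a check is shown below. The helper name nc_check is chosen purely for illustration and is not part of the NetCDF API.

Subroutine nc_check(status)
   Use netcdf
   Implicit None
   Integer, Intent(in) :: status
   ! Print the NetCDF error message and abort if the call did not succeed
   If (status /= NF90_NOERR) Then
      Write(*,*) Trim(nf90_strerror(status))
      Stop 1
   End If
End Subroutine nc_check

Each assignment of the form ncError = nf90_...(...) can then be wrapped as, for example, Call nc_check(nf90_enddef(ncid)).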
To build an executable from the example above, use the programming environment of your choice; in our example we stick to the default one:
#!/bin/bash

ftn -L${NETCDF_DIR}/lib/ -I${NETCDF_DIR}/include -lnetcdf -lnetcdff netcdf_io.f90 -o netcdf_io
The demo can be executed by submitting the following PBS batch script:
#!/bin/bash
#PBS -N Test_NetCDF
#PBS -l select=1:node_type=mi300a:mpiprocs=4
#PBS -l walltime=00:01:00
#PBS -q test

cd $PBS_O_WORKDIR

module load cray-hdf5-parallel/1.14.3.1
module load cray-netcdf-hdf5parallel/4.9.0.13

mpiexec -n 4 --ppn 4 ./netcdf_io

ncdump netcdf_io.nc > netcdf_io.nc.dump
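Assuming the script is saved under a file name of your choice, for example netcdf_io.pbs (the name is only illustrative), it can be submitted with
qsub netcdf_io.pbs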
As can be seen below, four MPI ranks produce a single NetCDF file, with each rank writing a rectangular chunk in x-direction of the contained 2D data field.
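With the per-rank offsets Start = (/ rank*4, 0 /) + 1 and count=(/4,8/), rank 0 for example fills columns 1 to 4 of the x dimension, rank 1 columns 5 to 8, and so on, so that the four 4x8 blocks together cover the full 16x8 field.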
netcdf netcdf_io {
dimensions:
        x = 16 ;
        y = 8 ;
variables:
        double data(y, x) ;
data:

 data =
  0, 0, 0, 0, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1.5, 1.5, 1.5, 1.5,
  0, 0, 0, 0, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1.5, 1.5, 1.5, 1.5,
  0, 0, 0, 0, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1.5, 1.5, 1.5, 1.5,
  0, 0, 0, 0, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1.5, 1.5, 1.5, 1.5,
  0, 0, 0, 0, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1.5, 1.5, 1.5, 1.5,
  0, 0, 0, 0, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1.5, 1.5, 1.5, 1.5,
  0, 0, 0, 0, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1.5, 1.5, 1.5, 1.5,
  0, 0, 0, 0, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1.5, 1.5, 1.5, 1.5 ;
}
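Note that ncdump lists the variable as data(y, x): the tool prints dimensions in C (row-major) order, whereas the Fortran API uses column-major ordering, so the dimensions appear reversed compared to the nf90_def_var call above.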
The output above can be generated from the resulting file netcdf_io.nc with the ncdump tool:
ncdump netcdf_io.nc
A more sophisticated, configurable example, which also determines the I/O bandwidth, can be found at NetCDF Extended Tests and Examples.