- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -
HDF5 Extended Tests and Examples
From HLRS Platforms
Benchmark Description
This HDF5 example shows how to write a “chunked” dataset to an HDF5 file in parallel. Each MPI process writes one “chunk” of the data to the dataset.
Command-line Options
Command-line arguments:
-gridPoints_x   number of grid points in x-direction
-gridPoints_y   number of grid points in y-direction
-nProcs_x       number of MPI ranks in x-direction
-nProcs_y       number of MPI ranks in y-direction
-path           path to the output directory (default: ./)
-help           print this help message
Example Output
1, 1, 1, 1, 3, 3, 3, 3,
1, 1, 1, 1, 3, 3, 3, 3,
1, 1, 1, 1, 3, 3, 3, 3,
2, 2, 2, 2, 4, 4, 4, 4,
2, 2, 2, 2, 4, 4, 4, 4,
2, 2, 2, 2, 4, 4, 4, 4
Resources
- https://support.hdfgroup.org/documentation/hdf5/latest/_intro_par_h_d_f5.html
- https://support.hdfgroup.org/documentation/hdf5/latest/_learn_basics.html
- https://support.hdfgroup.org/documentation/hdf5/latest/_getting_started.html
Build Environment
module use /opt/cray/pals/lmod/modulefiles/core
module load cray-pals
module swap PrgEnv-cray PrgEnv-gnu
module load cray-hdf5-parallel/1.14.3.1
make
Makefile
PROJECT = hdf5_example
PROJECT_DIR = $(shell pwd)
SRC_DIR = $(PROJECT_DIR)/src
BIN_DIR = $(PROJECT_DIR)/bin
INCLUDE_DIR = $(PROJECT_DIR)/include

SRC_FILES := mod_globals.f90 mod_mpi.f90 mod_parse_commandline.f90 mod_helpers.f90 mod_hdf5_io.f90 hdf5_example.f90

HDF5_INCLUDE_DIR = ${HDF5_ROOT}/include
HDF5_LIBRARY_DIR = ${HDF5_ROOT}/lib
HDF5_LIBS = -lhdf5 -lhdf5_fortran

FCFLAGS = -g -Wall
DEBUGGING = T
READFILE = T
PP_FLAGS = -cpp
FC = ftn

# ==============================================================================

FLAGS += $(PP_FLAGS)
FLAGS += $(FCFLAGS)

.PHONY: clean all lib

all:
	echo ${HDF5_INCLUDE_DIR}, ${HDF5_LIBRARY_DIR}
	mkdir -p $(BIN_DIR) $(INCLUDE_DIR)
	cd $(SRC_DIR); \
	$(FC) $(FLAGS) -I${HDF5_INCLUDE_DIR} -L${HDF5_LIBRARY_DIR} ${HDF5_LIBS} $(SRC_FILES) -o $(PROJECT); \
	mv $(PROJECT) $(BIN_DIR); \
	mv *.mod $(INCLUDE_DIR)

clean:
	rm -rf $(BIN_DIR) $(INCLUDE_DIR)
Execution Script
#!/bin/bash
#PBS -N bm035_Cray_HDF5
#PBS -l select=4:node_type=mi300a:mpiprocs=4
#PBS -l walltime=00:20:00
#PBS -q test

module use /opt/cray/pals/lmod/modulefiles/core
module load cray-pals
module swap PrgEnv-cray PrgEnv-gnu
module load cray-hdf5-parallel/1.14.3.1

NPROCS=16
GRIDPOINTS_X=100000
GRIDPOINTS_Y=10000
NPROCS_X=4
NPROCS_Y=4
EXEDIR=/zhome/academic/HLRS/hlrs/hpchppof/hunter/hunter_acceptance/Benchmarks_Performance/bm035_Cray_HDF5/bin
OUTPUT_DIR=/lustre/hpe/ws12/ws12.a/ws/hpchppof-hunter_acceptance/bm035_Cray_HDF5

mpirun -np ${NPROCS} --ppn 4 ${EXEDIR}/hdf5_example -gridPoints_x ${GRIDPOINTS_X} -gridPoints_y ${GRIDPOINTS_Y} -nProcs_x ${NPROCS_X} -nProcs_y ${NPROCS_Y} -path ${OUTPUT_DIR}
Execution Results
Commandline arguments:
  gridPoints_x: 100000
  gridPoints_y: 10000
  nProcs_x:     4
  nProcs_y:     4
  Path: /lustre/hpe/ws12/ws12.a/ws/hpchppof-hunter_acceptance/bm035_Cray_HDF5

--------------- MPI-IO Hints ---------------
romio_cb_pfr = disable
romio_cb_fr_types = aar
cb_align = 2
cb_buffer_size = 16777216
romio_cb_fr_alignment = 1
romio_cb_ds_threshold = 0
romio_cb_alltoall = automatic
romio_cb_read = automatic
romio_cb_write = automatic
romio_no_indep_rw = false
romio_ds_write = automatic
ind_wr_buffer_size = 524288
romio_ds_read = disable
ind_rd_buffer_size = 4194304
direct_io = false
striping_factor = 8
striping_unit = 4194304
overstriping_factor = 0
romio_lustre_start_iodevice = -1
aggregator_placement_stride = -1
abort_on_rw_error = disable
cb_config_list = *:*
cb_nodes = 8
romio_filesystem_type = CRAY ADIO:
--------------------------------------------

-------------------- HDF5 Benchmark --------------------
Number of MPI processes             : 16
Number of gridpoints in x-direction : 100000
Number of gridpoints in y-direction : 10000
Total I/O amount in MiB             : 7629.39
Time in sec                         : 3.43
I/O bandwidth in MiB/s              : 2226.91