[[Image:copy-logo-fraunhofer-itwm.gif]]
* Developer: Fraunhofer ITWM
* Available on: [[Hawk | Hawk (HPE Apollo)]]
* Category: [[:Category:Communication libraries | Communication libraries]]
* License: commercial
GPI-2 (Global address space Programming Interface) is a thread-safe PGAS API for InfiniBand, RoCE, Ethernet, Gemini and Aries networks.

GPI-2 aims at high performance, delivering wire speed from the interconnect. It relies on one-sided and asynchronous communication, which allows computation and communication to overlap. All GPI-2 methods provide timeout functionality for fault-tolerant operation; with that in place, GPI-2 offers mechanisms that allow applications to react to failures and continue their execution.
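In practice, the timeout functionality means that every potentially blocking GASPI call takes a gaspi_timeout_t argument (GASPI_BLOCK, GASPI_TEST or a value in milliseconds) and may return GASPI_TIMEOUT instead of blocking indefinitely. The following is a minimal sketch of how an application could react to a timed-out barrier; the 2000 ms timeout and the three retries are illustrative assumptions, not prescribed by the API.
<pre>
#include <stdio.h>
#include <GASPI.h>

/* Sketch: retry a barrier a few times with a finite timeout instead of
 * blocking forever. Timeout value and retry count are arbitrary. */
gaspi_return_t barrier_with_retries(void)
{
  const gaspi_timeout_t timeout_ms = 2000; /* illustrative value */

  for (int attempt = 0; attempt < 3; attempt++) {
    const gaspi_return_t ret = gaspi_barrier(GASPI_GROUP_ALL, timeout_ms);
    if (ret == GASPI_SUCCESS)
      return GASPI_SUCCESS;
    if (ret != GASPI_TIMEOUT)
      return ret; /* real error: give up */
    printf("barrier timed out, retrying (attempt %d)\n", attempt + 1);
    /* application-specific recovery could be triggered here */
  }
  return GASPI_TIMEOUT;
}
</pre>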
=== Using GPI-2 on Hawk ===
Load the necessary module. For example:
<pre>
module load gpi2
</pre>
=== Example ===
<pre>
#include <stdio.h>
#include <stdlib.h>
#include <GASPI.h>

int main(int argc, char *argv[])
{
  gaspi_rank_t rank, tnc;
  gaspi_float vers;

  /* Initialize the GPI-2 environment (blocking). */
  if (gaspi_proc_init(GASPI_BLOCK) != GASPI_SUCCESS) {
    printf("gaspi_proc_init failed!\n");
    exit(-1);
  }

  gaspi_version(&vers);   /* GPI-2 version */
  gaspi_proc_rank(&rank); /* own rank */
  gaspi_proc_num(&tnc);   /* total number of ranks */

  printf("rank: %d tnc: %d (gpi2: %.2f)\n", rank, tnc, vers);

  if (gaspi_barrier(GASPI_GROUP_ALL, GASPI_BLOCK) != GASPI_SUCCESS) {
    printf("gaspi_barrier failed!\n");
    exit(-1);
  }

  /* Shut down the GPI-2 environment. */
  gaspi_proc_term(GASPI_BLOCK);

  return 0;
}
</pre>
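The hello-world program above does not transfer any data. As a sketch of the one-sided communication model described in the introduction, the following fragment lets rank 0 write a value directly into the segment of rank 1 and attach a notification that rank 1 waits for; the segment id 0, queue 0, notification id 0 and the 1 MiB segment size are arbitrary choices for illustration.
<pre>
#include <stdio.h>
#include <GASPI.h>

/* Sketch: one-sided write with notification between rank 0 and rank 1.
 * Segment id, size, queue and notification id are illustrative choices;
 * return values are not checked here to keep the fragment short. */
void one_sided_example(gaspi_rank_t rank)
{
  const gaspi_segment_id_t seg = 0;
  const gaspi_size_t seg_size = 1 << 20; /* 1 MiB segment */

  /* Collective: every rank creates its part of the global memory segment. */
  gaspi_segment_create(seg, seg_size, GASPI_GROUP_ALL,
                       GASPI_BLOCK, GASPI_MEM_INITIALIZED);

  gaspi_pointer_t ptr;
  gaspi_segment_ptr(seg, &ptr); /* local address of the segment */

  if (rank == 0) {
    ((double *)ptr)[0] = 42.0; /* data to be transferred */

    /* One-sided write of 8 bytes into rank 1's segment, plus notification 0. */
    gaspi_write_notify(seg, 0, 1, seg, 0, sizeof(double),
                       0, 1, 0, GASPI_BLOCK);
    gaspi_wait(0, GASPI_BLOCK); /* flush queue 0 (local completion) */
  }
  else if (rank == 1) {
    gaspi_notification_id_t id;
    gaspi_notification_t val;

    /* Wait for notification 0; the data is then visible in the segment. */
    gaspi_notify_waitsome(seg, 0, 1, &id, GASPI_BLOCK);
    gaspi_notify_reset(seg, id, &val);
    printf("rank 1 received %.1f\n", ((double *)ptr)[0]);
  }
}
</pre>
The notification is guaranteed to become visible at rank 1 only after the written data, which is why rank 1 can safely read the value once gaspi_notify_waitsome returns.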
=== Compilation example ===
<pre>
gcc -O2 hello_gpi2.c -D_GNU_SOURCE -lpmi -lugni -lrca -lGPI2 -o hello_gpi2.bin
</pre>
=== Example to run the program ===
GPI-2 applications should be started with one process per NUMA socket. Use threads to exploit SMP parallelism within each NUMA socket (e.g. mctp3 for best performance), as sketched below the batch example. For example:
<pre>
# 2 nodes, 128 cores (ht is enabled), interactive
qsub -I -l select=2:node_type=rome:mpiprocs=128,walltime=00:05:00

# 8 procs on 2 nodes, socket affinity mask, ht on
mpirun -np 8 ./hello_gpi2.bin
</pre>
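As a rough illustration of this hybrid model, the sketch below uses plain OpenMP threads instead of mctp3; the thread count of 16 is a placeholder for the number of cores per NUMA socket and would normally come from OMP_NUM_THREADS or the batch environment.
<pre>
#include <stdio.h>
#include <omp.h>
#include <GASPI.h>

int main(int argc, char *argv[])
{
  gaspi_rank_t rank, tnc;

  /* One GPI-2 process per NUMA socket (started by mpirun as above). */
  gaspi_proc_init(GASPI_BLOCK);
  gaspi_proc_rank(&rank);
  gaspi_proc_num(&tnc);

  /* Threads exploit the cores of the local socket; 16 is a placeholder. */
  #pragma omp parallel num_threads(16)
  {
    int tid = omp_get_thread_num();
    /* ... per-thread computation on socket-local data ... */
    printf("rank %d thread %d\n", rank, tid);
  }

  /* Communication (one-sided writes, barriers, ...) is issued by the
   * process; GPI-2 is thread-safe, so threads may also call it directly. */
  gaspi_barrier(GASPI_GROUP_ALL, GASPI_BLOCK);
  gaspi_proc_term(GASPI_BLOCK);
  return 0;
}
</pre>
With gcc this would be compiled with the same flags as in the compilation example above plus -fopenmp.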