- Infos im HLRS Wiki sind nicht rechtsverbindlich und ohne Gewähr -
- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -
Advisor

| logo = [[Image:intel-logo.png]]
| developer = Intel

Revision as of 00:01, 18 January 2022

Intel® Advisor XE is a lightweight threading assistant for C, C++, C# and Fortran. It guides developers through the threading design process, automating the analyses required for a fast and correct implementation.

It helps developers add parallelism to their existing C/C++ or Fortran programs. Apart from thread parallelism, Advisor also supports the analysis of MPI-parallel applications. The overall efficiency of an MPI-parallel loop/function can be measured by manually adding up the individual bandwidths and performances. For example, suppose an application runs on two MPI ranks with Advisor attached to both tasks, and the bandwidths of rank 1 and rank 2 turn out to be X1 GB/s and X2 GB/s, with performances Y1 GF/s and Y2 GF/s, respectively. The total bandwidth is then (X1+X2) GB/s and the total performance (Y1+Y2) GF/s. If the values of X1 and X2, or Y1 and Y2, differ significantly, there is a load imbalance between the ranks. If the load balance among the MPI ranks is good, running Advisor on a single rank is sufficient.

In summary, one may use Intel Advisor XE to:

* predict the approximate parallel performance characteristics of the proposed parallel code regions.
* check for data sharing problems that could prevent the application from working correctly when parallelized.

For the last three points, one may refer [https://kb.hlrs.de/platforms/upload/Tutorial_2013_Advisor.pdf here].
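The rank-wise aggregation described above can be sketched as follows; X1, X2, Y1 and Y2 below are hypothetical placeholder values, not numbers from a real Advisor run:

```shell
# Hypothetical per-rank values read off two Advisor roofline reports.
X1=105.3; X2=98.7    # bandwidth of rank 1 and rank 2 in GB/s
Y1=12.4;  Y2=11.9    # performance of rank 1 and rank 2 in GF/s

# Total bandwidth and performance of the MPI-parallel region
# (awk is used because plain shell arithmetic is integer-only).
awk -v x1="$X1" -v x2="$X2" -v y1="$Y1" -v y2="$Y2" 'BEGIN {
    printf "total bandwidth:   %.1f GB/s\n", x1 + x2
    printf "total performance: %.1f GF/s\n", y1 + y2
}'
```

Since X1 and X2 (and Y1 and Y2) are close to each other in this sketch, the load balance would be considered good and a single-rank run would suffice.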
Why Intel Advisor?
Before checking the parallel efficiency of an application, it is necessary to understand how the application behaves at the core and node level. For example,
- whether it is memory bound or compute bound
- how good the vectorization is
- what the memory access pattern looks like
- whether there are dependencies hindering vectorization
- where different loops/functions lie on the roofline plot, etc.
Intel Advisor not only provides answers to all the above-mentioned questions, but also suggests solutions, for example, what kind of optimizations one needs to implement in order to improve the performance of the application.
A general introduction about the Advisor can be found here.
For the memory access pattern, one may refer here.
Further detail about vectorization and dependency can be found here.
How to use Intel Advisor?
First, compile your application with the flag "-g" in addition to the usual optimization flags, for example "-O2 (or -O3) -march=core-avx2" on Hawk. Then, set up the environment for Advisor by loading the corresponding module.
For example, on Hawk
module load advisor
On Vulcan
module load performance/advisor
If you have installed Intel oneAPI on your laptop then,
source /opt/intel/oneapi/setvars.sh
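Putting the compile step and the environment setup together, a minimal sketch for Hawk might look as follows; the source file name my_solver.c and the use of the mpicc wrapper are assumptions for illustration only:

```shell
# Build with debug info (-g) so Advisor can map results back to source
# lines, plus the optimization flags recommended above (Hawk example).
mpicc -g -O2 -march=core-avx2 -o a.out my_solver.c   # my_solver.c is a placeholder

# Make the Advisor tools available (use performance/advisor on Vulcan).
module load advisor
```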
Running Advisor on OpenMP parallel application
Select the number of OpenMP threads as,
export OMP_NUM_THREADS=num_of_threads
and bind them as,
export OMP_PROC_BIND=spread
Afterwards, collect survey, tripcounts and flops as follows,
advixe-cl -collect survey -project-dir results_advisor ./a.out
advixe-cl -collect tripcounts -flop -project-dir results_advisor ./a.out
Here, survey is the Advisor analysis that locates non-vectorized and poorly vectorized loops/functions and estimates the performance gain achievable with efficient vectorization. The trip counts analysis adds counters to measure the time spent in a particular loop/function, and the flag "-flop" additionally enables counting of floating-point operations.
Results can be visualized using Advisor GUI,
advixe-gui results_advisor/e000/e000.advixeexp
Visualizing results on Hawk can be slow; alternatively, one may pack all the results into a read-only snapshot file as follows:
advixe-cl --snapshot --project-dir=results_advisor --cache-sources path_to_source_code --cache-binaries path_to_binary
The above command creates a file snapshot000.advixeexpz, which requires much less disk space than the original results_advisor directory and can easily be copied to a local machine. The file can be viewed in the GUI as follows:
advixe-gui snapshot000.advixeexpz
Running Advisor on MPI parallel application
Collect survey, tripcounts and flops as follows,
mpirun -np $num_of_mpi_tasks advixe-cl -collect survey -project-dir results_advisor ./a.out
mpirun -np $num_of_mpi_tasks advixe-cl -collect tripcounts -flop -project-dir results_advisor ./a.out
The above commands create Advisor reports for all ranks. If one would like to run Advisor on a single rank only, do the following:
mpirun -np $((num_of_mpi_tasks-1)) ./a.out : -np 1 advixe-cl -collect survey -project-dir results_advisor ./a.out
mpirun -np $((num_of_mpi_tasks-1)) ./a.out : -np 1 advixe-cl -collect tripcounts -flop -project-dir results_advisor ./a.out
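Note that the subtraction only takes place when it is wrapped in the shell's arithmetic expansion $(( )); a bare $num_of_mpi_tasks-1 would expand to a literal string such as "4-1", which mpirun cannot interpret. A minimal check:

```shell
# Assumed example: 4 MPI ranks in total, Advisor attached to one rank,
# the remaining ranks run the plain binary.
num_of_mpi_tasks=4
echo $((num_of_mpi_tasks-1))   # arithmetic expansion: prints 3
```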
Running Advisor on MPI+OpenMP parallel application
The following example employs 32 MPI tasks distributed uniformly over both sockets of a Hawk node, with 2 OpenMP threads per MPI task.
module load mpt
export MPI_SHEPHERD=1
export MPI_DSM_CPULIST=0-127/2:allhosts
export OMP_NUM_THREADS=2
export OMP_PROC_BIND=close
export MPI_OPENMP_INTEROP=1
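As a sanity check of the pinning above (assuming a 128-core Hawk node): 32 ranks times 2 threads pinned with stride 2 cover every second core, i.e. cores 0, 2, ..., 126.

```shell
ranks=32; threads=2; stride=2          # matches the settings above
echo "cores used: $((ranks*threads)) of 128"
echo "last pinned core: $(( (ranks*threads-1)*stride ))"
```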
One can then run Advisor in the same way as described in the previous section.
mpirun -np $((num_of_mpi_tasks-1)) ./a.out : -np 1 advixe-cl -collect survey -project-dir results_advisor ./a.out
mpirun -np $((num_of_mpi_tasks-1)) ./a.out : -np 1 advixe-cl -collect tripcounts -flop -project-dir results_advisor ./a.out
Additional analysis - memory access pattern and dependencies
While visualizing the results, Advisor might suggest performing additional analyses such as memory access pattern (MAP) and dependencies. One may collect these, for example, as follows:
mpirun -np $((num_of_mpi_tasks-1)) ./a.out : -np 1 advixe-cl -collect map -project-dir results_advisor ./a.out
mpirun -np $((num_of_mpi_tasks-1)) ./a.out : -np 1 advixe-cl -collect dependencies -project-dir results_advisor ./a.out
Note that the above analyses are possible only after the survey and trip counts data have been collected.