- Information contained in the HLRS Wiki is not legally binding and HLRS is not responsible for any damages that might result from its use -
Revision as of 19:16, 17 December 2021
Intel® Advisor XE is a threading and vectorization assistant for C, C++, C# and Fortran. It guides developers through threading design, automating the analyses required for a fast and correct implementation, and helps them add parallelism to their existing C/C++ or Fortran programs.
== Why Intel Advisor? ==
Before checking the parallel efficiency of an application, it is necessary to understand how the application behaves at the core level, for example:
- whether it is memory bound or compute bound
- how well the code is vectorized
- what the memory access pattern looks like
- whether there are dependencies hindering vectorization
- where the different loops/functions lie on the roofline plot
Intel Advisor not only provides answers to all of the above questions, it also suggests solutions, for example, which optimizations to implement in order to improve the performance of the application.
== How to use Intel Advisor? ==
First, compile your application with the additional flag "-g", so that Advisor can map its results back to source lines. Then set up the environment for Advisor by loading the corresponding module.
For example, on Hawk
module load advisor
On Vulcan
module load performance/advisor
If you have installed Intel oneAPI on your local machine, source the environment script instead:
source /opt/intel/oneapi/setvars.sh
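Whichever route applies, a quick sanity check afterwards is to verify that advixe-cl is on the PATH. The following is a minimal sketch using the module names and oneAPI path listed above:

```shell
# Sketch: load the Advisor environment if a module system or a local
# oneAPI installation is available, then check that advixe-cl exists.
if command -v module >/dev/null 2>&1; then
    module load advisor                 # Hawk; use performance/advisor on Vulcan
elif [ -f /opt/intel/oneapi/setvars.sh ]; then
    . /opt/intel/oneapi/setvars.sh      # local oneAPI installation
fi

if command -v advixe-cl >/dev/null 2>&1; then
    advisor_ready=yes
else
    advisor_ready=no                    # environment is not set up yet
fi
echo "advisor_ready=$advisor_ready"
```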
== Running Advisor on an OpenMP-parallel application ==
Set the number of OpenMP threads,
export OMP_NUM_THREADS=num_of_threads
and bind them as follows,
export OMP_PROC_BIND=spread
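Taken together, the two settings might look like this; the thread count of 4 is an assumed example, not a recommendation:

```shell
# Sketch of the thread setup above; 4 is an assumed example thread count.
export OMP_NUM_THREADS=4
export OMP_PROC_BIND=spread   # spread the threads across the available cores
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS OMP_PROC_BIND=$OMP_PROC_BIND"
```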
Afterwards, collect survey, tripcounts and flops as follows,
advixe-cl -collect survey -project-dir results_advisor ./a.out
advixe-cl -collect tripcounts -flop -project-dir results_advisor ./a.out
Now the results can be opened as follows
advixe-gui results_advisor/e000/e000.advixeexp
Visualizing results directly on Hawk can be slow. Alternatively, one may pack all the results into a read-only snapshot file as follows:
advixe-cl --snapshot --project-dir=results_advisor --cache-sources path_to_source_code --cache-binaries path_to_binary
The above command creates a file snapshot000.advixeexpz, which can easily be copied to a local machine and viewed in the GUI:
advixe-gui snapshot000.advixeexpz
== Running Advisor on an MPI-parallel application ==
Collect survey, tripcounts and flops as follows,
mpirun -np $num_of_mpi_tasks advixe-cl -collect survey -project-dir results_advisor ./a.out
mpirun -np $num_of_mpi_tasks advixe-cl -collect tripcounts -flop -project-dir results_advisor ./a.out
The above commands create Advisor reports for all ranks. If one would like to run Advisor on a single rank only, use MPMD syntax as follows:
mpirun -np $((num_of_mpi_tasks-1)) ./a.out : -np 1 advixe-cl -collect survey -project-dir results_advisor ./a.out
mpirun -np $((num_of_mpi_tasks-1)) ./a.out : -np 1 advixe-cl -collect tripcounts -flop -project-dir results_advisor ./a.out
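Note that the subtraction only works as intended inside shell arithmetic: a plain $num_of_mpi_tasks-1 would expand to a string such as "32-1". A small sketch of how the MPMD command line is assembled, with 32 tasks as an assumed example:

```shell
# Sketch: build the MPMD command line so that one rank runs under Advisor
# and the remaining ranks run plainly; 32 is an assumed example task count.
num_of_mpi_tasks=32
plain_ranks=$((num_of_mpi_tasks - 1))   # shell arithmetic evaluates to 31
cmd="mpirun -np $plain_ranks ./a.out : -np 1 advixe-cl -collect survey -project-dir results_advisor ./a.out"
echo "$cmd"
```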
== Running Advisor on a hybrid MPI+OpenMP application ==
The following example employs 32 MPI tasks distributed uniformly over both sockets, with 2 OpenMP threads per MPI task, on a Hawk node.
module load mpt
export MPI_SHEPHERD=1
export MPI_DSM_CPULIST=0-127/2:allhosts
export OMP_NUM_THREADS=2
export OMP_PROC_BIND=close
export MPI_OPENMP_INTEROP=1
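As a quick sanity check of this layout (a sketch; 128 cores per Hawk node is assumed), the task and thread counts should multiply to at most the core count:

```shell
# Sketch: verify that 32 MPI tasks x 2 OpenMP threads fit a 128-core node.
num_of_mpi_tasks=32
export OMP_NUM_THREADS=2
cores_used=$((num_of_mpi_tasks * OMP_NUM_THREADS))
echo "cores used: $cores_used of 128"   # prints: cores used: 64 of 128
```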
One can then run Advisor in the same way as described in the section above.
mpirun -np $((num_of_mpi_tasks-1)) ./a.out : -np 1 advixe-cl -collect survey -project-dir results_advisor ./a.out
mpirun -np $((num_of_mpi_tasks-1)) ./a.out : -np 1 advixe-cl -collect tripcounts -flop -project-dir results_advisor ./a.out