
D MPI distribution notes

This appendix has brief notes on many of the MPI distributions supported by Arm Performance Reports.

Advice on settings and problems particular to each distribution is given here.

D.1 Bull MPI

Bull X-MPI is supported.

D.2 Cray MPT

Arm Performance Reports users may wish to read 4.1.3 Static linking on Cray X-Series Systems.

Arm Performance Reports has been tested with Cray XK7 and XC30 systems.

Arm Performance Reports requires Arm's sampling libraries to be linked with the application before running on this platform.

See 4.1.1 Linking for a step-by-step guide.

Arm supplies module files in REPORTS_INSTALLATION_PATH/share/modules/cray.

See 4.1.5 Dynamic and static linking on Cray X-Series systems using the modules environment to simplify linking with the sampling libraries.
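For example, to see which module files are supplied (substitute your actual installation directory for the REPORTS_INSTALLATION_PATH placeholder; the commands assume an Environment Modules or Lmod setup):

```shell
# Make the Arm-supplied Cray module files visible to the modules environment
module use REPORTS_INSTALLATION_PATH/share/modules/cray

# List the modules now available, including the Arm-supplied ones
module avail
```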

Known Issues:

  • By default, scripts wrapping Cray MPT are not detected. You can force detection by setting the ALLINEA_DETECT_APRUN_VERSION environment variable to "yes" before starting Performance Reports.
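For example, in a bash-style shell you would export the variable before launching (the perf-report invocation shown is illustrative):

```shell
# Force detection of launcher scripts that wrap Cray MPT's aprun
export ALLINEA_DETECT_APRUN_VERSION=yes

# Then start Performance Reports as usual, for example:
# perf-report aprun -n 64 ./myprog
```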

D.3 Intel MPI

Arm Performance Reports has been tested with Intel MPI 4.1.x, 5.0.x and onwards.

Known Issue: If you use Spectrum LSF as the workload manager in combination with Intel MPI and you see one of the following errors:

  • <target program> exited before it finished starting up. One or more processes were killed or died without warning
  • <target program> encountered an error before it initialised the MPI environment. Thread 0 terminated with signal SIGKILL

or the job is otherwise killed during launch, then you may need to set and export I_MPI_LSF_USE_COLLECTIVE_LAUNCH=1 before executing Arm Performance Reports. See the Using IntelMPI under LSF quick guide and Resolve the problem of the Intel MPI job …hang in the cluster for more details.
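For example, in a bash-style shell (the mpirun line is illustrative):

```shell
# Tell Intel MPI to use LSF's collective launch mechanism
export I_MPI_LSF_USE_COLLECTIVE_LAUNCH=1

# Then run your program under Performance Reports, for example:
# perf-report mpirun -n 128 ./myprog
```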


D.4 MPICH 2

If you see the error undefined reference to MPI_Status_c2f during initialization, or when manually building the sampling libraries as described in 4.1.1 Linking, then you need to rebuild MPICH 2 with Fortran support.
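For reference, a minimal MPICH 2 rebuild with Fortran bindings enabled might look like this (the install prefix is illustrative; check MPICH 2's ./configure --help for the exact Fortran options in your release):

```shell
# Rebuild MPICH 2 with Fortran 77 and Fortran 90 bindings enabled
./configure --prefix=/opt/mpich2 --enable-f77 --enable-fc
make
make install
```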


D.5 MPICH 3

MPICH 3.0.3 and 3.0.4 do not work with Arm Performance Reports due to an MPICH bug. MPICH 3.1 addresses this and is supported.

D.6 Open MPI

Arm Performance Reports products have been tested with Open MPI 1.6.x, 1.8.x, 1.10.x, 2.0.x, and 3.0.x.

The following versions of Open MPI do not work with Arm Performance Reports because of bugs in the Open MPI debug interface:

  • Open MPI 2.1.0 to 2.1.2.
  • Open MPI 3.0.0 when compiled with the Arm Compiler for HPC on Armv8 (AArch64) systems.
  • Open MPI 3.0.x when compiled with some versions of the GNU compiler on Armv8 (AArch64) systems.
  • Open MPI 3.x when compiled with some versions of the IBM XLC/XLF or PGI compilers on IBM Power (little-endian POWER8 or POWER9) systems.
  • Open MPI 3.1.0 and 3.1.1.
  • Open MPI 3.x with any version of PMIx < 2.
  • Open MPI 4.0.1 with PMIx 3.1.2.

To resolve any of the above issues, instead select Open MPI (Compatibility) for the MPI Implementation.

D.6.1 Open MPI 3.x on IBM Power with the GNU compiler

To use Open MPI versions 3.0.0 to 3.0.4 (inclusive) or 3.1.0 to 3.1.3 (inclusive) with the GNU compiler on IBM Power systems, you might need to configure the Open MPI build with CFLAGS=-fasynchronous-unwind-tables. This fixes a startup bug in which Arm Performance Reports is unable to step out of MPI_Init into your main function; the bug is caused by missing debug information and optimization in the Open MPI library. If you already configure with -g, you do not need to add this extra flag. An example configure command is:

    ./configure --prefix=/software/openmpi-3.1.2 CFLAGS=-fasynchronous-unwind-tables

If you do not have the option to recompile your MPI, an alternative workaround is to select Open MPI (Compatibility) for the MPI Implementation. This issue is fixed in later versions.

D.7 Platform MPI

Platform MPI 9.x is supported, but only with the mpirun command. Currently mpiexec is not supported.

D.8 SGI MPT / SGI Altix

SGI MPT 2.10+ is supported.

Some SGI systems cannot compile programs on the batch nodes, for example because the gcc package is not installed.

If this applies to your system, you must explicitly compile the Arm MPI wrapper library using the make-profiler-libraries command, and then explicitly link your programs against the Arm profiler and sampler libraries.
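For example (a sketch; make-profiler-libraries prints the exact link flags to use on your system when it finishes, so prefer its output over the illustrative flags shown in the comment):

```shell
# On a node with a working compiler, build the Arm MPI wrapper and
# sampler libraries into the current directory
make-profiler-libraries

# Then add the link flags the command reports to your application's
# link line, for example (illustrative):
#   mpicc -g myprog.c -o myprog -L$PWD -lmap-sampler-pmpi -lmap-sampler \
#         -Wl,-rpath=$PWD
```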

The mpio.h header file shipped with SGI MPT 2.10 contains a mismatch between the declarations of MPI_File_set_view and some similar functions and their PMPI equivalents, for example PMPI_File_set_view. This prevents Arm Performance Reports from generating the MPI wrapper library. Please contact SGI for a fix.

If you are using SGI MPT with SLURM and would normally use mpiexec_mpt to launch your program, you will need to use srun --mpi=pmi2 directly.
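For example, a launch line would change as follows (the process count is illustrative):

```shell
# Before: mpiexec_mpt -n 64 ./myprog
# After, launching directly through SLURM:
srun --mpi=pmi2 -n 64 ./myprog
```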

Preloading the Arm profiler and MPI wrapper libraries is not supported in Express Launch mode. Arm recommends that you explicitly link your programs against these libraries to work around this problem. If this is not possible, you can manually compile the MPI wrapper and explicitly set LD_PRELOAD in the launch line.
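A sketch of a manual preload, assuming the wrapper and sampler libraries were built with make-profiler-libraries into a directory of your choosing (the path and library names shown are illustrative; check the output of make-profiler-libraries for the actual names on your system):

```shell
# Preload the MPI wrapper and sampler libraries explicitly in the launch line
LD_PRELOAD=/path/to/profiler-libs/libmap-sampler-pmpi.so:/path/to/profiler-libs/libmap-sampler.so \
    srun -n 8 ./myprog
```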


The use of the --export argument to srun (SLURM 14.11 or newer) is not supported. In this case you can avoid using --export by exporting the necessary environment variables before running Arm Performance Reports.

The use of the --task-prolog argument to srun (SLURM 14.03 or older) is also not supported, because the necessary libraries cannot be preloaded. You will need either to avoid using this argument or to explicitly link to the libraries.