How to build MVAPICH with Arm Compiler.
The MVAPICH2 software, based on the MPI 3.1 standard, delivers the best performance, scalability, and fault tolerance for high-end computing systems and servers using InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE networking technologies. The MVAPICH2 software family is ABI compatible with the version of MPICH it is based on. For more information, see the MVAPICH website.
For the purposes of this build, the following components are used:
|Arm Compiler for HPC|| |
|Operating system||RHEL 7.5|
Recipes for other versions of the application are available in the GitLab Packages Wiki.
Before you begin
- Install Arm Compiler. For more information, see our Installation instructions.
- Install the required OS packages. These can be installed using the following command:
yum -y install <package name>
Arm recommends that you rebuild your MPI implementation (Open MPI, MVAPICH, MPICH) after each installation of a new version of Arm Compiler for HPC. This ensures that the Fortran interface incorporates any changes to the armflang module format, and that the correct run-time libraries are used.
Download and unpack the application source code:
tar -zxvf mvapich2-2.3.1.tar.gz
Change into the unpacked directory:
cd mvapich2-2.3.1
Set an install location, INSTALL_DIR, where MVAPICH will be installed. For example:
export INSTALL_DIR=/path/to/MVAPICH_install
Replace /path/to/MVAPICH_install with the path to your installation.
Set the compilers to use for the build:
export CC=armclang
export CXX=armclang++
export FC=armflang
Create a build directory and change into it:
mkdir build
cd build
Configure the build from the build directory, specifying the devices and protocols to build for, and the language interfaces to include:
Note: The default on Linux is OpenFabrics (OFA) IB/iWARP/RoCE with the CH3 channel. Explicitly select it with:
../configure --prefix=$INSTALL_DIR --with-device=ch3:mrail --with-rdma=gen2 --enable-cxx --enable-fc
Note: For more configuration options, see the MVAPICH user guide.
Build, test, and install MVAPICH, using:
make -j
make install
make testing
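As a quick sanity check after installation, you can compile and run a minimal MPI program with the newly built compiler wrappers. This is a sketch only: it assumes the INSTALL_DIR set earlier, and the source file name and rank count are illustrative.

```shell
# Put the new MVAPICH installation first on PATH (assumes INSTALL_DIR from above).
export PATH=$INSTALL_DIR/bin:$PATH

# Write a minimal MPI program (file name is illustrative).
cat > hello_mpi.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF

# Compile with the MVAPICH compiler wrapper and run on 4 ranks.
mpicc hello_mpi.c -o hello_mpi
mpirun -np 4 ./hello_mpi
```

Each rank should print one "Hello" line; if the wrappers or libraries cannot be found, re-check INSTALL_DIR and your PATH.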
By default, MVAPICH enables CPU affinity. However, for multi-threaded programs this can lead to poor thread placement and poor performance. In this case, it is recommended to disable CPU affinity:
export MV2_ENABLE_AFFINITY=0
You must also disable CPU affinity if you want to over-subscribe the available cores with MPI tasks, for testing and development purposes.
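For example, a sketch of oversubscribing a node for testing. MV2_ENABLE_AFFINITY is documented in the MVAPICH2 user guide; the executable name ./my_app is a placeholder.

```shell
# Disable CPU affinity so ranks are not pinned to distinct cores.
export MV2_ENABLE_AFFINITY=0

# Launch more ranks than there are physical cores (testing/development only).
mpirun -np 128 ./my_app   # ./my_app is a placeholder for your binary
```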
MVAPICH supports explicit control of CPU and thread placement and binding policy via environment variables such as MV2_CPU_BINDING_POLICY. mpirun provides details of the mapping between CPUs and processes if MV2_SHOW_CPU_BINDING is set to 1.
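For example, a sketch of selecting a binding policy and inspecting the resulting placement. The variable names are taken from the MVAPICH2 user guide; the binary name is a placeholder.

```shell
# "bunch" binds consecutive ranks to adjacent cores; "scatter" spreads them
# across sockets.
export MV2_CPU_BINDING_POLICY=bunch

# Ask mpirun to print the CPU-to-process mapping at startup.
export MV2_SHOW_CPU_BINDING=1

mpirun -np 8 ./my_app   # ./my_app is a placeholder for your binary
```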
For more information, see the MVAPICH User Guide.