Implementation-independent MPI Communication Library

There are several MPI library implementations, such as MPICH, Open MPI, and vendor-provided MPI libraries. Even when clusters run the same operating system and CPU architecture, the same executable may not run on all of them. This is because the MPI standard does not specify an application binary interface (ABI) that fixes all the data types and constant values. For example, as shown in Figure 1, an executable built under the MPICH environment does not run under the Open MPI environment; the user has to recompile the program under Open MPI. The right way to overcome this issue would be to specify an ABI in the MPI standard, but no such ABI has been standardized.

Figure 1. The MPI Runtime Issue

MPI-Adapter†

MPI-Adapter solves this problem without waiting for ABI standardization. It is a dynamic library that is interposed between the application and the installed MPI library at program load time and translates the ABI, that is, the data types, handles, and constant values, of the MPI implementation the application was compiled against into that of the implementation installed on the cluster. As shown in Figure 2, an executable built under the MPICH environment can thus run in an Open MPI environment without recompilation.

Figure 2. MPI Adapter
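The core of such a translation layer can be sketched as a mapping from one implementation's handle constants to the other's. The following is a hypothetical illustration of the idea, not the actual MPI-Adapter source: the source-side constants follow MPICH's published header values, while the target-side handles are placeholder objects standing in for Open MPI's internal datatype symbols.

```c
#include <stddef.h>

/* Source ABI (MPICH style): integer datatype handles with fixed
   bit-pattern values taken from MPICH's mpi.h. */
typedef int src_Datatype;
#define SRC_MPI_INT    ((src_Datatype)0x4c000405)
#define SRC_MPI_DOUBLE ((src_Datatype)0x4c00080b)

/* Target ABI (Open MPI style): pointer datatype handles. These dummy
   objects stand in for the library's internal datatype structs. */
typedef void *dst_Datatype;
static int dst_int_obj, dst_double_obj;

/* Translate one source-ABI datatype handle into the target ABI.
   An adapter layer would apply such a mapping to every handle and
   constant argument before forwarding the call to the real library. */
static dst_Datatype translate_datatype(src_Datatype d) {
    switch (d) {
    case SRC_MPI_INT:    return &dst_int_obj;
    case SRC_MPI_DOUBLE: return &dst_double_obj;
    default:             return NULL;  /* unknown handle: fail */
    }
}
```

In practice the adapter must also translate communicators, error codes, status structures, and callback signatures in both directions, which is why it is packaged as a complete interposition library rather than a simple lookup table.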

 

†Shinji Sumimoto, Kohta Nakashima, Akira Naruse, Kouichi Kumon, Takashi Yasui, Yoshikazu Kamoshida, Hiroya Matsuba, Atsushi Hori and Yutaka Ishikawa, “The Design of a Seamless MPI Computing Environment for Commodity-based Clusters,” EuroPVM/MPI 2009.