MPIs

MPIs are implementations of the Message Passing Interface (MPI) standard, the de facto API for programming distributed-memory parallel computers. The MPI standard, maintained by the MPI Forum, specifies a portable set of routines that enable processes to communicate and synchronize across hardware and networks.

The MPI model comprises parallel processes that communicate through communicators, ranks, and tags. It provides point-to-point communication via MPI_Send and MPI_Recv, and a rich set of collective operations such as MPI_Bcast, MPI_Scatter, MPI_Gather, MPI_Reduce, and MPI_Alltoall. Non-blocking variants (MPI_Isend, MPI_Irecv) and persistent calls support overlap of computation and communication. Derived data types, groups, and topologies help optimize data handling and process layout.
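
To make the model concrete, the following minimal sketch (assuming any conforming MPI library, such as MPICH or Open MPI) pairs a blocking MPI_Send/MPI_Recv with an MPI_Reduce, then performs a ring exchange with non-blocking MPI_Isend/MPI_Irecv so that computation could proceed while the messages are in flight:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Blocking point-to-point: rank 0 sends one int to rank 1;
           the message is matched by communicator, source, and tag. */
        if (size > 1) {
            int msg = 42;
            if (rank == 0)
                MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            else if (rank == 1)
                MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
        }

        /* Collective: every rank contributes; rank 0 receives the sum. */
        int local = rank, sum = 0;
        MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        /* Non-blocking ring exchange: post both operations, overlap
           useful work here, then complete them with MPI_Waitall. */
        int right = (rank + 1) % size, left = (rank + size - 1) % size;
        int out = rank, in = -1;
        MPI_Request reqs[2];
        MPI_Irecv(&in, 1, MPI_INT, left, 1, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(&out, 1, MPI_INT, right, 1, MPI_COMM_WORLD, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

        MPI_Finalize();
        return 0;
    }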

Prominent MPIs include MPICH, Open MPI, MVAPICH, and Intel MPI. These implementations aim for standards conformance and interoperability, and run on a wide range of networks and hardware.

Applications are typically written in C, C++, or Fortran and follow the single program, multiple data (SPMD) model. Programs call MPI_Init at start and MPI_Finalize at end, determine their rank and the communicator size with MPI_Comm_rank and MPI_Comm_size, and perform communication accordingly.
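
The canonical SPMD skeleton is sketched below: every process runs the same binary, and behavior diverges only through the rank each process is assigned.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */

        printf("hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }

With typical toolchains this is compiled through a wrapper such as mpicc and launched with mpiexec (or mpirun), for example: mpiexec -n 4 ./hello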

MPI has evolved through major revisions (MPI-1, MPI-2, MPI-3, and MPI-4), adding features such as dynamic process management, enhanced I/O, neighborhood collectives, and improved one-sided communication.
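
As a sketch of the one-sided (remote memory access) interface using the simple fence synchronization mode, one rank writes directly into another rank's exposed memory window; the target posts no matching receive:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank exposes one int as a remotely accessible window. */
        int buf = -1;
        MPI_Win win;
        MPI_Win_create(&buf, (MPI_Aint)sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);            /* open an access epoch */
        if (rank == 0 && size > 1) {
            int value = 99;
            /* Write into rank 1's window without its participation. */
            MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        }
        MPI_Win_fence(0, win);            /* close the epoch; put is done */

        if (rank == 1)
            printf("rank 1's buffer now holds %d\n", buf);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }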

In practice, MPIs underpin much of high-performance computing on clusters and supercomputers. They offer portability across platforms but introduce complexity and overhead; performance depends on library quality, network characteristics, and program design. Many deployments combine MPI with shared-memory models like OpenMP.
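
A common hybrid pattern, sketched below assuming an OpenMP-capable compiler, requests a thread support level with MPI_Init_thread, reduces across threads within each rank via OpenMP, and then reduces across ranks via MPI:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        /* FUNNELED: threads may exist, but only the main thread
           makes MPI calls. */
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* OpenMP reduces across threads within this rank... */
        long long local = 0;
        #pragma omp parallel for reduction(+:local)
        for (long long i = rank; i < 1000000; i += size)
            local += i;

        /* ...then MPI reduces the per-rank partial sums. */
        long long total = 0;
        MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, 0,
                   MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum = %lld (thread level provided: %d)\n",
                   total, provided);

        MPI_Finalize();
        return 0;
    }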