MVAPICH2

MVAPICH2 is a high-performance Message Passing Interface (MPI) library designed for cluster computing over high-speed interconnects such as InfiniBand, RoCE, iWARP, and Omni-Path. It provides robust MPI-1, MPI-2, and many MPI-3 features with an emphasis on low latency and high bandwidth for scalable applications. The project originated at The Ohio State University in collaboration with Mellanox Technologies and is the successor to the original MVAPICH project. It is distributed as open-source software under a BSD-style license and is widely deployed on Linux clusters in academic and research contexts.

Architecture and features: MVAPICH2 uses a transport-aware communication substrate that supports InfiniBand verbs and other network APIs, with a modular CH4-based path and, historically, the CH3 backend. It implements RDMA-based point-to-point and collective operations, supports GPU-aware communication via MVAPICH2-GDR for CUDA-enabled systems, and offers features such as multi-rail traffic, asynchronous progress, and optional fault-tolerance extensions in some variants. It aims to maximize data throughput and minimize CPU overhead by offloading communication to network hardware when possible.

Platform support and usage: MVAPICH2 runs on Linux and supports x86 and a variety of other architectures. It is commonly packaged with distribution tools and loaded via environment modules; users compile MPI applications with mpicc and run them with mpirun or mpiexec. It also includes variants tailored to specific networks and is used in many national labs and HPC centers for large-scale simulations, data analysis, and scientific workloads.