
MPI_Allgatherv

MPI_Allgatherv is a collective communication operation in the Message Passing Interface (MPI) that collects varying amounts of data from all processes and distributes the combined data to all processes. Unlike MPI_Allgather, each process can contribute a different number of elements, enabling nonuniform data distributions.

Signature (C): MPI_Allgatherv(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, const int recvcounts[], const int displs[], MPI_Datatype recvtype, MPI_Comm comm).

In operation, every process sends sendcount elements of type sendtype from its sendbuf. All processes receive a buffer recvbuf that holds the total data, which is the sum of all per-process contributions. The recvcounts array specifies, for each rank r, how many elements are received from rank r, and the displs array gives the displacement (in elements of recvtype) in recvbuf where the data from rank r is placed. The resulting receive buffer on every process contains the concatenation of data from the ranks in ascending order: first from rank 0, then rank 1, and so on. The total number of received elements must equal the sum of recvcounts, and the received data has type recvtype (which may differ from sendtype).
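
A minimal sketch in C of a complete call, assuming each rank contributes rank + 1 integers; buffer and variable names such as mydata and total are illustrative, not part of the API:

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank contributes rank + 1 elements, so contributions differ. */
        int sendcount = rank + 1;
        int *mydata = malloc(sendcount * sizeof(int));
        for (int i = 0; i < sendcount; i++)
            mydata[i] = rank;                    /* payload: the sender's rank */

        /* recvcounts[r]: elements arriving from rank r;
           displs[r]: element offset in recvbuf where rank r's data is placed. */
        int *recvcounts = malloc(size * sizeof(int));
        int *displs = malloc(size * sizeof(int));
        int total = 0;
        for (int r = 0; r < size; r++) {
            recvcounts[r] = r + 1;
            displs[r] = total;
            total += recvcounts[r];
        }

        int *recvbuf = malloc(total * sizeof(int));

        MPI_Allgatherv(mydata, sendcount, MPI_INT,
                       recvbuf, recvcounts, displs, MPI_INT,
                       MPI_COMM_WORLD);

        /* Every rank now holds the concatenation 0, 1, 1, 2, 2, 2, ... */
        if (rank == 0) {
            for (int i = 0; i < total; i++)
                printf("%d ", recvbuf[i]);
            printf("\n");
        }

        free(mydata); free(recvcounts); free(displs); free(recvbuf);
        MPI_Finalize();
        return 0;
    }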

Fortran uses the corresponding MPI_ALLGATHERV, with parameters adapted to Fortran conventions. The call is collective and must be invoked by all processes in comm with matching parameters.

The operation is commonly used when processes produce differing amounts of data that must be shared with all participants.
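
When the per-rank sizes are not known in advance, a common pattern is to exchange the counts first with MPI_Allgather and then build the displacements as prefix sums. The sketch below assumes double-precision payloads; the helper name gather_all and its interface are hypothetical:

    #include <mpi.h>
    #include <stdlib.h>

    /* Gathers every rank's variable-length double array onto all ranks.
       The caller owns the returned buffer; *total_out receives its length. */
    double *gather_all(const double *local, int mycount, MPI_Comm comm, int *total_out)
    {
        int size;
        MPI_Comm_size(comm, &size);

        int *counts = malloc(size * sizeof(int));
        int *displs = malloc(size * sizeof(int));

        /* Step 1: every rank learns how many elements each rank will send. */
        MPI_Allgather(&mycount, 1, MPI_INT, counts, 1, MPI_INT, comm);

        /* Step 2: prefix sums of the counts give the displacements. */
        int total = 0;
        for (int r = 0; r < size; r++) {
            displs[r] = total;
            total += counts[r];
        }

        double *all = malloc(total * sizeof(double));

        /* Step 3: the variable-count collective itself. */
        MPI_Allgatherv(local, mycount, MPI_DOUBLE,
                       all, counts, displs, MPI_DOUBLE, comm);

        free(counts);
        free(displs);
        *total_out = total;
        return all;
    }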

Nonblocking variants exist, such as MPI_Iallgatherv, which allow overlap with computation.
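
A short sketch of the nonblocking form, assuming the caller supplies buffers and count/displacement arrays as in the blocking example above; the wrapper name is illustrative:

    #include <mpi.h>

    /* Post the collective, do independent work, then wait; the receive
       buffer is valid only after MPI_Wait returns. */
    void allgatherv_overlapped(const int *sendbuf, int sendcount,
                               int *recvbuf, const int *recvcounts,
                               const int *displs, MPI_Comm comm)
    {
        MPI_Request req;

        MPI_Iallgatherv(sendbuf, sendcount, MPI_INT,
                        recvbuf, recvcounts, displs, MPI_INT,
                        comm, &req);

        /* ... computation that touches neither sendbuf nor recvbuf ... */

        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }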