allgather

Allgather is a collective communication operation found in MPI and other parallel programming libraries. It gathers data from all processes and distributes the concatenation of all processes’ data to every process. After completion, each process holds a local copy of the entire data set, consisting of the per-process data blocks from all processes.

In its simplest form, allgather assumes each process contributes an equally sized block. The local input is broadcast to all processes, and the receive buffer on each process contains p blocks, where p is the number of processes. For heterogeneous block sizes, MPI provides Allgatherv, which allows each process to specify a different count and displacement.

Implementation details include in-place variants, where a process uses its own receive buffer to hold the incoming data, indicated by MPI_IN_PLACE in MPI. Two common algorithms are ring allgather and recursive-doubling allgather. Ring allgather performs p-1 rounds in which each process sends a block to one neighbor and receives a block from the other; the total data moved per process is (p-1) times the block size, and the running time grows roughly linearly with p. Recursive doubling (for power-of-two p) uses log2(p) steps, doubling the amount of data exchanged at each step, and typically achieves lower latency for large p. Other approaches, such as the Bruck algorithm, trade extra memory for fewer steps.

Performance depends on message size, network latency, bandwidth, and topology. The operation is widely used to assemble global state, such as distributed arrays or matrices, before subsequent computations. In MPI, the standard interfaces are MPI_Allgather and MPI_Allgatherv.