MFLOP

MFLOPS stands for millions of floating-point operations per second, a unit used to express the rate at which a computer performs floating-point arithmetic. A floating-point operation is an arithmetic calculation on floating-point numbers, typically addition, subtraction, multiplication, or division. Depending on the counting convention, a fused multiply-add (FMA) may be counted as one operation or as two, which affects reported MFLOPS values.
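The effect of the FMA counting convention can be made concrete. In this sketch (the loop and the timing method are illustrative choices, not any standard benchmark), the same workload yields two different MFLOPS figures depending on whether each multiply-add update is counted as one operation or two:

```python
# Illustrative only: how the FMA counting convention changes a reported MFLOPS figure.
import time

def fma_loop(n):
    """Perform n multiply-add-style updates: acc = acc * a + b."""
    acc, a, b = 0.0, 1.000001, 0.5
    for _ in range(n):
        acc = acc * a + b  # one multiply and one add per iteration
    return acc

n = 1_000_000
start = time.perf_counter()
fma_loop(n)
elapsed = time.perf_counter() - start

mflops_fma_as_one = n / elapsed / 1e6      # each update counted as 1 operation
mflops_fma_as_two = 2 * n / elapsed / 1e6  # each update counted as 2 operations
print(f"{mflops_fma_as_one:.1f} vs {mflops_fma_as_two:.1f} MFLOPS")
```

The measured time is identical in both cases; only the operation count differs, so the second figure is exactly double the first.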

Measurement of MFLOPS relies on benchmarks that specify which operations are counted and at what precision.
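A minimal sketch of such a measurement (the workload, vector size, and timing method below are my own choices, not a standard benchmark): time a routine whose floating-point operation count is known by construction, then divide by elapsed time. A dot product is convenient because it performs exactly one multiply and one add per element pair:

```python
# Minimal MFLOPS measurement sketch: a dot product has a known
# operation count (one multiply and one add per element pair).
import time

def dot(xs, ys):
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total

n = 500_000
xs = [1.0] * n
ys = [2.0] * n

start = time.perf_counter()
result = dot(xs, ys)
elapsed = time.perf_counter() - start

flop_count = 2 * n  # n multiplies + n adds
mflops = flop_count / elapsed / 1e6
print(f"dot = {result}, achieved ~{mflops:.1f} MFLOPS")
```

A figure obtained this way reflects the whole software stack (here, interpreted Python), which is why achieved numbers can sit far below a machine's theoretical peak.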

Historically, MFLOPS was widely used in the 1980s and 1990s to characterize the performance of vector processors and supercomputers. As computing power grew, the unit was superseded by GFLOPS (billions of FLOPS) and later higher orders, with TFLOPS (trillions of FLOPS) becoming common for modern systems. MFLOPS remains cited in educational contexts and in discussions of older hardware or specific embedded systems.

Practical considerations include the counting method and the impact of modern features such as fused operations, parallelism, and multi-core or many-core architectures, all of which influence how floating-point performance is reported.

Common benchmarks include linear algebra workloads and synthetic tests designed to stress floating-point units. Reported MFLOPS can reflect peak theoretical performance, achieved real-world performance, or performance on a specific workload, and results vary with data type (single vs. double precision), compiler optimizations, and system architecture.

Users interpreting MFLOPS figures should consider the benchmark, precision, and counting rules involved in order to make meaningful comparisons.
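Peak theoretical figures, unlike achieved ones, are usually derived arithmetically from hardware parameters rather than measured. A hedged sketch of that arithmetic (the core count, clock rate, and FLOPs-per-cycle values below are hypothetical illustration values, not any specific CPU):

```python
# Peak theoretical FLOPS estimate: cores * clock rate * FLOPs retired per cycle.
# All hardware parameters below are hypothetical illustration values.
cores = 4
clock_hz = 2.0e9     # 2 GHz
flops_per_cycle = 8  # e.g. a SIMD unit retiring 8 floating-point ops per cycle

peak_flops = cores * clock_hz * flops_per_cycle
peak_mflops = peak_flops / 1e6
peak_gflops = peak_flops / 1e9
print(f"peak: {peak_mflops:.0f} MFLOPS = {peak_gflops:.1f} GFLOPS")
# 4 * 2e9 * 8 = 6.4e10 FLOPS = 64,000 MFLOPS = 64 GFLOPS
```

Such a peak is an upper bound that real workloads rarely sustain, which is one reason benchmark, precision, and counting rules matter when comparing reported numbers.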