FLOPS

FLOPS stands for floating-point operations per second and is a unit of computing performance that measures how many floating-point arithmetic operations a processor can perform each second. A single floating-point operation is a FLOP, and FLOPS refers to the rate of performing such operations. The exact counting of what constitutes a FLOP can vary; an addition or multiplication is typically one FLOP, while a fused multiply-add (FMA) may be counted as two FLOPs or as one, depending on the convention used.
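The difference between counting conventions can be sketched with a small example (illustrative only; the function and figures are hypothetical, not from any particular standard): an n-element dot product performs n multiplications and n-1 additions, but on hardware with FMA it can be executed as n fused instructions.

```python
def dot_product_flops(n, fma_counts_as=2):
    """FLOP count for an n-element dot product.

    An n-element dot product needs n multiplies and n-1 adds. With a
    fused multiply-add (FMA), each multiply+add pair is one instruction;
    conventions differ on whether an FMA counts as two FLOPs or one.
    """
    if fma_counts_as == 2:
        return 2 * n - 1   # multiplies and adds counted separately
    return n               # each FMA instruction counted as one FLOP

print(dot_product_flops(1000))                   # 1999
print(dot_product_flops(1000, fma_counts_as=1))  # 1000
```

The same workload thus reports nearly a 2x different FLOP count depending on the convention, which is why comparisons should state the convention used.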

FLOPS are commonly expressed with prefixes such as MFLOPS, GFLOPS, TFLOPS, and PFLOPS to reflect different scales of performance.

Theoretical peak FLOPS are estimated from a processor's clock rate and its parallel execution resources, such as vector width and the number of execution units. For example, a core that can perform multiple floating-point operations per cycle across a wide vector unit at a given clock rate yields a higher peak FLOPS figure.

In practice, achieved FLOPS are typically lower than the theoretical peak due to memory bandwidth, latency, and algorithmic efficiency. Real-world performance is often assessed with benchmarks like LINPACK or SPECfp, which measure sustained FLOPS under representative workloads rather than peak capability.

FLOPS remain a standard measure of raw arithmetic capacity, especially in high-performance computing, but comparisons require specifying the precision (single vs. double), the counting convention, and whether the figure reflects peak or sustained performance.
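The peak estimate from clock rate and parallel execution resources can be written as a simple product. The sketch below assumes a hypothetical processor; the figures are made up for illustration and do not describe any real chip.

```python
def peak_flops(clock_hz, flops_per_cycle_per_unit, units_per_core, cores):
    """Theoretical peak = clock rate x FLOPs per cycle per execution unit
    x execution units per core x core count."""
    return clock_hz * flops_per_cycle_per_unit * units_per_core * cores

# Hypothetical 3 GHz, 8-core CPU with two FMA units per core, each
# completing an 8-wide double-precision FMA (16 FLOPs) every cycle:
peak = peak_flops(3e9, 16, 2, 8)
print(f"{peak / 1e12:.3f} TFLOPS")  # 0.768 TFLOPS
```

A sustained LINPACK result on such a machine would typically come in well below this figure, which is exactly the peak-versus-sustained distinction noted above.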