GigaFLOPS

GigaFLOPS, short for giga floating-point operations per second, is a unit of computational speed used to express how many billions of floating-point operations a processor can perform each second. The term is commonly abbreviated as GFLOPS and is used for CPUs, GPUs, accelerators, and whole-system performance. One GFLOPS equals 1 x 10^9 floating-point operations per second.

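As a quick illustration of the definition, the small Python sketch below converts a raw operations-per-second figure into GFLOPS; the rate used is a made-up value, not a measurement of any real processor.

    def to_gflops(flops_per_second: float) -> float:
        """One GFLOPS is 1e9 floating-point operations per second."""
        return flops_per_second / 1e9

    # Hypothetical rate of 3.2e10 operations per second -> 32.0 GFLOPS
    print(to_gflops(3.2e10))
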
FLOPS measure capability in peak theoretical terms; actual performance depends on the workload. In many definitions, a fused multiply-add (FMA) counts as two floating-point operations; however, some benchmarks count it as a single operation. Peak GFLOPS can be estimated as the product of the number of floating-point cores, the clock frequency, and the number of operations per cycle. Real sustained GFLOPS are often substantially lower due to memory bandwidth, latency, and software efficiency.

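The peak estimate described above reduces to a one-line calculation. The Python sketch below is a minimal illustration, assuming an FMA is counted as two operations; the core count, clock, and per-cycle throughput are hypothetical values, not any specific product's specifications.

    def peak_gflops(fp_cores: int, clock_ghz: float, ops_per_cycle: int) -> float:
        """Peak GFLOPS = floating-point cores x clock (GHz) x ops per core per cycle."""
        return fp_cores * clock_ghz * ops_per_cycle

    # Hypothetical chip: 8 cores at 3.0 GHz, each retiring 8 FMAs per cycle.
    # Counting each FMA as two operations gives 16 ops per cycle per core.
    print(peak_gflops(fp_cores=8, clock_ghz=3.0, ops_per_cycle=16))  # 384.0 GFLOPS
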
GFLOPS gained prominence as processors progressed from millions to billions of floating-point operations per second. Early systems were measured in megaflops before reaching gigaflops; by the 2000s, GPUs and CPUs reported tens or hundreds of GFLOPS. In the 2010s and beyond, teraflops became common, but GFLOPS remains in use for historical comparisons and for smaller-scale systems.

For example, a processor rated at 4 GFLOPS peak can perform up to four billion single-precision floating-point operations per second under ideal conditions. Actual throughput varies depending on precision (single vs. double) and instruction mix. Benchmarks such as LINPACK or SPEC FP are used to estimate GFLOPS for comparability.

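In the same spirit as benchmark-based estimates (though far simpler than LINPACK or SPEC FP), sustained throughput can be approximated by timing a known quantity of floating-point work. The rough Python sketch below times a dense matrix multiply, which performs about 2*n^3 operations, and is intended only as an illustration of the calculation.

    import time
    import numpy as np

    n = 2048
    a = np.random.rand(n, n)   # float64 (double precision) by default
    b = np.random.rand(n, n)

    start = time.perf_counter()
    c = a @ b
    elapsed = time.perf_counter() - start

    flops = 2.0 * n**3          # a multiply-add counted as two operations
    print(f"sustained: {flops / elapsed / 1e9:.1f} GFLOPS")
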
GFLOPS remains a widely used, though imperfect, metric for comparing computational speed across architectures, particularly when discussing raw arithmetic throughput rather than overall system performance.