F32

F32 refers to a 32-bit floating-point numeric type commonly used in computing. It is usually synonymous with the IEEE 754 single-precision format (binary32) and is also known as float32 or simply float in many programming environments. The format is designed to balance range, precision, and memory usage for numeric computation.

A 32-bit float is organized into three fields: 1 sign bit, 8 exponent bits, and 23 fraction (significand) bits. The exponent uses a bias of 127, and normalized numbers have an implicit leading 1 before the binary point. This structure enables a wide range of values, from very small to very large, with approximately seven decimal digits of precision.
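
As an illustrative sketch (plain Python with the standard struct module; the helper name decompose_f32 is not a standard API), the layout can be inspected by reinterpreting a value's 32-bit pattern and applying the bias and the implicit leading 1:

```python
import struct

def decompose_f32(x: float) -> tuple[int, int, int]:
    """Split a number into the sign, exponent, and fraction fields
    of its IEEE 754 single-precision (binary32) encoding."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]  # the 32-bit pattern
    sign = bits >> 31                  # 1 bit
    exponent = (bits >> 23) & 0xFF     # 8 bits, stored with a bias of 127
    fraction = bits & 0x7FFFFF         # 23 bits
    return sign, exponent, fraction

sign, exponent, fraction = decompose_f32(6.5)
# Normalized value = (-1)^sign * (1 + fraction / 2^23) * 2^(exponent - 127)
value = (-1) ** sign * (1 + fraction / 2**23) * 2 ** (exponent - 127)
print(sign, exponent, fraction, value)  # 0 129 5242880 6.5
```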

Special values and subnormal numbers are defined for edge cases. If the exponent is all zeros and the fraction is nonzero, the value is a subnormal number; if the exponent is all zeros and the fraction is zero, the value is zero. If the exponent is all ones, the value is either infinity (when the fraction is zero) or NaN (not a number, when the fraction is nonzero), with the fraction bits further distinguishing signaling from quiet NaNs. The finite range extends from approximately ±1.17549435 × 10^-38 (the minimum normal value) up to ±3.4028235 × 10^38.
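
A minimal sketch of these classification rules, again in plain Python (classify_f32 is an illustrative name, not a library function):

```python
import struct

def classify_f32(x: float) -> str:
    """Classify a value by the exponent and fraction fields of its
    binary32 encoding, following the rules described above."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    exponent = (bits >> 23) & 0xFF
    fraction = bits & 0x7FFFFF
    if exponent == 0:
        return "zero" if fraction == 0 else "subnormal"
    if exponent == 0xFF:
        return "infinity" if fraction == 0 else "NaN"
    return "normal"

print(classify_f32(1.0))            # normal
print(classify_f32(0.0))            # zero
print(classify_f32(1e-45))          # subnormal (rounds to the smallest positive subnormal)
print(classify_f32(float("inf")))   # infinity
print(classify_f32(float("nan")))   # NaN
```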

F32 is widely used in performance-sensitive contexts where memory bandwidth and speed are priorities, such as graphics processing, machine learning inference, and scientific computing. It is supported across major CPUs and GPUs and is commonly exposed in high-level languages and libraries as float32, float, or similar typedefs. For comparison, double-precision (float64) offers greater range and accuracy at the cost of memory and computation. See also IEEE 754 and related floating-point formats like float16.
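
To make the precision comparison with float64 concrete, one possible sketch (standard-library Python; to_f32 is an illustrative helper) rounds a double-precision value to the nearest float32 and shows that only about seven significant digits survive:

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack(">f", struct.pack(">f", x))[0]

pi64 = 3.141592653589793   # pi in double precision (float64)
pi32 = to_f32(pi64)        # pi rounded to single precision (float32)
print(f"{pi64:.17g}")      # 3.1415926535897931
print(f"{pi32:.17g}")      # 3.1415927410125732 -> agrees to about seven digits
```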
