CRC

A cyclic redundancy check (CRC) is a method used to detect accidental changes to raw data in digital networks and storage systems. It works by treating a block of data as a binary polynomial and dividing it by a fixed generator polynomial. The remainder of this division, expressed as a set of bits, is appended to the data as the CRC. When the data are later read or received, the same polynomial division is performed; if the remainder is zero (or matches an expected value after a final transformation), the data are considered error-free. CRCs are designed to catch common transmission and storage errors, including burst errors affecting multiple consecutive bits.
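
To illustrate with a deliberately small, non-standard configuration: take the degree-3 generator x^3 + x + 1 (binary 1011) and the message 1101. The sender appends three zero bits (one per bit of CRC) and divides using XOR in place of subtraction:

    1101000        message 1101 with three zero bits appended
    1011           generator aligned with the leading 1
    -------
    0110000
     1011
    -------
    0011100
      1011
    -------
    0001010
       1011
    -------
    0000001        remainder 001 is the 3-bit CRC

The transmitted codeword is 1101 001. Dividing 1101001 by 1011 in the same way leaves remainder 000, so the receiver accepts the data as error-free.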

The CRC process relies on binary arithmetic over GF(2). The generator polynomial is chosen for a given protocol or standard and has degree n, producing an n-bit CRC. Different configurations use different polynomials, initial values, and final XOR values. In practice, CRCs can be implemented in software or hardware, often with table-driven or bitwise shift-register approaches to speed calculation. The choice of polynomial and configuration determines the error-detection capabilities: the types of error patterns that can be reliably detected and the likelihood of detecting them.
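
As a sketch of the bitwise shift-register approach, the following C function computes one common 16-bit configuration, CRC-16-CCITT (polynomial 0x1021, initial value 0xFFFF, no reflection, no final XOR). The function name and structure are illustrative, not taken from any particular library:

    #include <stdint.h>
    #include <stddef.h>

    /* Bitwise (shift-register style) CRC-16-CCITT:
       polynomial 0x1021, initial value 0xFFFF, no final XOR. */
    uint16_t crc16_ccitt(const uint8_t *data, size_t len)
    {
        uint16_t crc = 0xFFFF;                 /* initial value */
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)data[i] << 8;     /* feed next byte into the high bits */
            for (int bit = 0; bit < 8; bit++) {
                if (crc & 0x8000)              /* top bit set: "subtract" (XOR) the polynomial */
                    crc = (uint16_t)((crc << 1) ^ 0x1021);
                else
                    crc = (uint16_t)(crc << 1);
            }
        }
        return crc;
    }

For the ASCII input "123456789" this configuration is commonly quoted with the check value 0x29B1. A table-driven variant trades a 256-entry lookup table (512 bytes for a 16-bit CRC) for processing a byte per step instead of a bit.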

Common CRCs include CRC-32 and CRC-16 variants. CRC-32 (IEEE 802.3) uses a 32-bit polynomial (0x04C11DB7), typically configured with an initial value of all ones and a final XOR of all ones; it is widely used in Ethernet, ZIP archives, and other data formats. CRC-16 variants (for example CRC-16-CCITT and CRC-16-IBM) use 16-bit polynomials and appear in protocols and storage systems where smaller checksums are sufficient. While CRCs are effective for detecting accidental errors, they are not designed for cryptographic security.
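
As a minimal table-driven sketch of the CRC-32 configuration just described, the following C code uses the bit-reversed (reflected) form 0xEDB88320 of the polynomial 0x04C11DB7, which is how this CRC is conventionally implemented in software; the function and table names are illustrative:

    #include <stdint.h>
    #include <stddef.h>

    static uint32_t crc32_table[256];

    /* Build the 256-entry lookup table for the reflected polynomial
       0xEDB88320 (the bit-reversed form of 0x04C11DB7). */
    static void crc32_init_table(void)
    {
        for (uint32_t n = 0; n < 256; n++) {
            uint32_t c = n;
            for (int k = 0; k < 8; k++)
                c = (c & 1) ? (0xEDB88320u ^ (c >> 1)) : (c >> 1);
            crc32_table[n] = c;
        }
    }

    /* Table-driven CRC-32: all-ones initial value, all-ones final XOR. */
    uint32_t crc32(const uint8_t *data, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;            /* initial value: all ones */
        for (size_t i = 0; i < len; i++)
            crc = crc32_table[(crc ^ data[i]) & 0xFF] ^ (crc >> 8);
        return crc ^ 0xFFFFFFFFu;              /* final XOR: all ones */
    }

crc32_init_table() must be called once before the first call to crc32(). For the ASCII input "123456789" this configuration yields the widely quoted check value 0xCBF43926.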

CRCs are pervasive in data communications and storage due to their efficiency and strong but non-cryptographic error-detection properties.
