
Spinlocks

Spinlocks are a synchronization primitive used to protect shared data in multi-threaded or multi-core systems. They rely on busy-waiting: a thread repeatedly checks a lock flag until it becomes available, instead of sleeping or yielding the processor.

Acquisition is typically implemented with atomic operations such as test-and-set or compare-and-swap. A thread attempting to acquire the lock spins until it observes the lock as free, then atomically sets it to held and proceeds into the critical section. Releasing the lock clears the flag and may include memory barriers to ensure proper ordering of memory operations.
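
As a concrete illustration of this acquire/release cycle, here is a minimal sketch of a test-and-set spinlock using C11 atomics. The type and function names are illustrative, not taken from any particular library.

```c
#include <stdatomic.h>

/* Minimal test-and-set spinlock sketch (C11 atomics). */
typedef struct {
    atomic_flag held;
} spinlock_t;

static inline void spinlock_init(spinlock_t *l) {
    atomic_flag_clear_explicit(&l->held, memory_order_relaxed);
}

static inline void spinlock_acquire(spinlock_t *l) {
    /* Spin until test-and-set observes the flag as clear and sets it.
       Acquire ordering keeps critical-section accesses from being
       reordered before the lock is held. */
    while (atomic_flag_test_and_set_explicit(&l->held, memory_order_acquire)) {
        /* busy-wait */
    }
}

static inline void spinlock_release(spinlock_t *l) {
    /* Release ordering publishes the critical section's writes before
       the flag appears clear to other threads. */
    atomic_flag_clear_explicit(&l->held, memory_order_release);
}
```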

Variants of spinlocks address different concerns. Simple spinlocks use a single boolean flag and can cause cache line bouncing under contention. Ticket spinlocks grant access in FIFO order, providing fairness but with additional overhead. Queue-based spinlocks, such as MCS or CLH locks, place waiting threads in a local queue, reducing cache contention and improving scalability on many-core systems.
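
For example, a ticket spinlock can be sketched with two counters, again assuming C11 atomics; the names here are illustrative only.

```c
#include <stdatomic.h>

/* Ticket spinlock sketch: FIFO fairness via a ticket counter and a
   "now serving" counter. */
typedef struct {
    atomic_uint next_ticket;   /* next ticket to hand out */
    atomic_uint now_serving;   /* ticket currently allowed in */
} ticket_lock_t;

static inline void ticket_lock_init(ticket_lock_t *l) {
    atomic_init(&l->next_ticket, 0);
    atomic_init(&l->now_serving, 0);
}

static inline void ticket_lock_acquire(ticket_lock_t *l) {
    /* Take a ticket, then spin until it is the one being served. */
    unsigned int me = atomic_fetch_add_explicit(&l->next_ticket, 1,
                                                memory_order_relaxed);
    while (atomic_load_explicit(&l->now_serving, memory_order_acquire) != me) {
        /* busy-wait: every waiter polls the same counter, which is the
           cache traffic that MCS/CLH queue locks avoid */
    }
}

static inline void ticket_lock_release(ticket_lock_t *l) {
    /* Hand the lock to the next ticket in line. */
    atomic_fetch_add_explicit(&l->now_serving, 1, memory_order_release);
}
```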

Advantages of spinlocks include low overhead for very short critical sections and the avoidance of context switches or sleeping in environments where blocking is expensive or disallowed, such as some kernels or real-time systems. Disadvantages include wasted CPU cycles during contention, potential starvation or unfairness in some variants, cache thrashing, and poor performance for long-held locks or highly contended data. They are generally unsuitable for user-space threads that may sleep, or for workloads with long critical sections.

Implementation considerations include the use of atomic primitives, memory ordering guarantees, and, in interrupt contexts, disabling interrupts to prevent deadlock. Backoff strategies are often employed to reduce contention, as in the sketch below. Spinlocks remain common in low-level systems and performance-critical paths where blocking is undesirable.
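
A simple backoff scheme might look like the following sketch, which retries the test-and-set with a bounded exponential delay. spin_pause() is a hypothetical helper standing in for a platform-specific spin-wait hint such as the x86 PAUSE instruction; the bound of 1024 is an arbitrary example, not a recommended value.

```c
#include <stdatomic.h>

/* Hypothetical helper: hint to the CPU that this is a spin-wait loop.
   On x86 this maps to the PAUSE instruction; elsewhere it is a no-op. */
static inline void spin_pause(void) {
#if defined(__x86_64__) || defined(__i386__)
    __builtin_ia32_pause();
#endif
}

/* Acquire with bounded exponential backoff: after a failed attempt,
   wait an increasing number of pause hints before retrying, so waiters
   do not hammer the lock's cache line in lockstep. */
static inline void spinlock_acquire_backoff(atomic_flag *held) {
    unsigned int delay = 1;
    while (atomic_flag_test_and_set_explicit(held, memory_order_acquire)) {
        for (unsigned int i = 0; i < delay; i++)
            spin_pause();
        if (delay < 1024)
            delay *= 2;   /* cap the backoff at an arbitrary bound */
    }
}
```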
