OutofOrderExecution

Out-of-Order Execution (OOE) is a computer architecture technique used in many modern CPUs to increase instruction throughput by executing instructions as their operands become available, rather than strictly in program order. The goal is to keep execution units busy and to maximize instruction-level parallelism while preserving the original program semantics.

The processor fetches and decodes instructions, then issues them to reservation stations or an execution unit pool. A reorder buffer, along with register renaming, tracks in-flight instructions and resolves data hazards by allowing later instructions to proceed if their operands are ready. The results are kept tentatively and later committed in program order, ensuring correct architectural state even if execution occurred out of order. Memory accesses use buffers to preserve correct memory ordering.
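
To make this flow concrete, here is a minimal Python sketch of dynamic scheduling with in-order commit. It is an illustration rather than a model of any real core: the instruction names, latencies, and register names are made up, results are broadcast the cycle they finish, and register renaming is left out because each destination register in the example is written only once.

```python
# Toy model: instructions execute as soon as their operands are ready
# (possibly out of program order), but a reorder buffer (ROB) retires
# them strictly in program order.
from dataclasses import dataclass, field

@dataclass
class Instr:
    name: str
    dest: str
    srcs: list
    latency: int             # cycles of execution once operands are ready
    remaining: int = field(init=False)
    done: bool = field(init=False, default=False)

    def __post_init__(self):
        self.remaining = self.latency

def run(program):
    ready_regs = set()       # registers whose values have been produced
    rob = list(program)      # reorder buffer, kept in program order
    cycle = 0
    while rob:
        cycle += 1
        # Execute: any unfinished instruction whose sources are available
        # makes progress this cycle, regardless of program order.
        for instr in rob:
            if not instr.done and all(s in ready_regs for s in instr.srcs):
                instr.remaining -= 1
                if instr.remaining == 0:
                    instr.done = True
                    ready_regs.add(instr.dest)   # broadcast result to waiters
                    print(f"cycle {cycle}: {instr.name} finished")
        # Commit: retire only from the head of the ROB, in program order.
        while rob and rob[0].done:
            print(f"cycle {cycle}: {rob.pop(0).name} committed")

run([
    Instr("load r1", dest="r1", srcs=[],     latency=4),  # slow memory access
    Instr("add  r2", dest="r2", srcs=["r1"], latency=1),  # waits on the load
    Instr("mul  r3", dest="r3", srcs=[],     latency=1),  # independent work
])
```

Running this prints that the independent multiply finishes on cycle 1, long before the load it does not depend on, yet it commits last because retirement follows the ROB order.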

Speculative execution and aggressive branch prediction fill the pipeline with work by predicting the path of branches and executing instructions ahead of time. If predictions are correct, performance is improved; if not, the speculative results are discarded and the machine state is rolled back to the last known good state.
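
As a rough illustration of the rollback half of this, the sketch below checkpoints a toy register file before speculatively executing one side of a branch, then either keeps or discards the speculative results when the branch resolves. The register names, the work on each path, and the idea of copying the whole state are assumptions made for readability; real hardware checkpoints rename maps and squashes in-flight instructions rather than copying registers.

```python
# Toy speculation: execute past an unresolved branch using a predicted
# direction, then roll back to a checkpoint if the prediction was wrong.
def speculate_past_branch(regs, predicted_taken, actual_taken):
    checkpoint = dict(regs)              # snapshot of the known-good state
    # Speculatively run the predicted path before the branch resolves.
    if predicted_taken:
        regs["r1"] = regs["r0"] + 10     # made-up work on the taken path
    else:
        regs["r2"] = regs["r0"] * 2      # made-up work on the fall-through path
    # Branch resolves: on a misprediction, discard speculative results and
    # restore the last known good state.
    if predicted_taken != actual_taken:
        regs.clear()
        regs.update(checkpoint)
        return "mispredicted: speculative results squashed", regs
    return "predicted correctly: speculative results kept", regs

print(speculate_past_branch({"r0": 5}, predicted_taken=True, actual_taken=True))
print(speculate_past_branch({"r0": 5}, predicted_taken=True, actual_taken=False))
```

The predicted direction would come from a branch predictor (for example, saturating counters indexed by branch address); the sketch simply takes it as an argument.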

OOE improves instruction-level parallelism and overall throughput but increases microarchitectural complexity, power consumption, and die area. It also interacts with memory ordering and security concerns, notably the side-channel vulnerabilities exposed by speculative execution in recent years.

Historically, out-of-order execution traces to dynamic scheduling ideas from Tomasulo’s algorithm in the 1960s and was implemented in commercial CPUs from the 1990s onward, becoming a standard feature in many superscalar processors such as Intel and AMD designs.
