CPUScheduling

CPUScheduling, commonly written as CPU scheduling, is the set of techniques an operating system uses to decide which process runs on the central processing unit at any given time. The goal is to maximize overall system performance and responsiveness while ensuring fair access to CPU time. Scheduling decisions occur when a process becomes ready, when the running process blocks on I/O, or when the running process is preempted.

A scheduler selects a process from the ready queue and transfers control to it, an operation that requires a context switch. Scheduling is influenced by workload characteristics, such as CPU-bound vs I/O-bound processes, and by the available hardware resources. Preemption is common in time-sharing systems and reduces the waiting time of interactive processes, at the cost of context-switch overhead.

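To make the dispatch step concrete, here is a minimal Python sketch of a FIFO ready queue and the bookkeeping a context switch implies. It is purely illustrative: the names dispatch, run_for, and CONTEXT_SWITCH_COST are inventions of this sketch, not part of any real kernel interface.

```python
from collections import deque

# Minimal dispatcher sketch (illustrative only): processes are plain dicts,
# the ready queue is FIFO, and switching processes is modeled as a fixed cost.
CONTEXT_SWITCH_COST = 1  # arbitrary overhead units charged per switch

def dispatch(ready_queue, run_for):
    """Pick the process at the head of the ready queue, charge the
    context-switch overhead when the CPU changes hands, and let the
    process run one burst. run_for(proc) must return True if the process
    still needs the CPU (e.g. it was preempted) and False if it finished
    or blocked on I/O."""
    overhead = 0
    current = None
    while ready_queue:
        nxt = ready_queue.popleft()
        if nxt is not current:          # switching processes costs time
            overhead += CONTEXT_SWITCH_COST
            current = nxt
        if run_for(current):            # still runnable: back into the queue
            ready_queue.append(current)
    return overhead

# Example: three processes that each need a few "bursts" of CPU.
procs = [{"pid": p, "bursts_left": n} for p, n in [(1, 3), (2, 1), (3, 2)]]

def one_burst(proc):
    proc["bursts_left"] -= 1
    return proc["bursts_left"] > 0

print("total switch overhead:", dispatch(deque(procs), one_burst))
```

Note that re-dispatching the process that already holds the CPU charges no switch cost in this sketch, which is the basic reason schedulers try to avoid needless preemption.
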
Common scheduling criteria include CPU utilization, throughput, average turnaround time, average waiting time, response time, and fairness. Some criteria conflict, so systems balance them and may introduce aging to prevent starvation.

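As a concrete illustration of two of these criteria, the sketch below computes average waiting time and average turnaround time for processes served in FCFS order; the burst lengths are made-up example values and all processes are assumed to arrive at time 0.

```python
# Illustrative sketch: average waiting and turnaround time under FCFS.
# Assumes every process arrives at time 0; burst lengths are example values.
bursts = [24, 3, 3]               # CPU burst length of each process, in time units

waiting = []                       # time spent in the ready queue before running
turnaround = []                    # time from arrival (0) to completion
clock = 0
for burst in bursts:
    waiting.append(clock)          # FCFS: wait for everything queued ahead of you
    clock += burst
    turnaround.append(clock)       # completion time == turnaround when arrival is 0

print("average waiting time:   ", sum(waiting) / len(waiting))        # 17.0
print("average turnaround time:", sum(turnaround) / len(turnaround))  # 27.0
```

Serving the long burst first is what inflates the average waiting time here; running the same bursts shortest-first would bring it down to 3.0.
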
Common algorithms: First-Come, First-Served (non-preemptive, simple but can yield long waits); Shortest Job First / Shortest Remaining Time (minimizes average wait but requires future burst estimates); Priority scheduling (preemptive or non-preemptive; can cause starvation without aging); Round Robin (preemptive with fixed time quantum, good for interactivity); Multilevel queues and Multilevel feedback queues (hierarchical schemes with different policies). Real-time systems may use Rate Monotonic or Earliest Deadline First, which provide guarantees for hard deadlines.

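Round Robin is the easiest of these to show end to end. The sketch below simulates it with a hypothetical quantum of 4 and made-up burst lengths, ignores context-switch cost for brevity, and reports each process's completion time.

```python
from collections import deque

# Illustrative Round Robin simulation: each process runs for at most `quantum`
# time units before being preempted and moved to the back of the ready queue.
# Burst lengths and the quantum are example values; switch cost is ignored.

def round_robin(bursts, quantum):
    remaining = dict(enumerate(bursts))      # pid -> CPU time still needed
    ready = deque(remaining)                 # FIFO ready queue of pids
    clock, completion = 0, {}
    while ready:
        pid = ready.popleft()
        slice_ = min(quantum, remaining[pid])
        clock += slice_
        remaining[pid] -= slice_
        if remaining[pid] == 0:
            completion[pid] = clock          # process finished
        else:
            ready.append(pid)                # preempted: back of the queue
    return completion

print(round_robin([24, 3, 3], quantum=4))    # {1: 7, 2: 10, 0: 30}
```

A very small quantum approaches pure time-slicing at the price of many switches, while a very large one degenerates into FCFS.
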
In practice, OS implementations combine strategies and tune parameters; performance depends on workload, burst time estimates, and the cost of context switches. Starvation remains a risk and is typically mitigated with aging.

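One common way to obtain such burst estimates is exponential averaging of a process's previous bursts, as used with SJF-style policies. The sketch below is illustrative only: the initial guess, alpha = 0.5, and the observed bursts are arbitrary example values.

```python
# Illustrative sketch of exponential averaging for next-burst prediction:
#   new_estimate = alpha * last_actual_burst + (1 - alpha) * old_estimate
# The initial guess, alpha, and the observed bursts are example values.

def predict_next_burst(observed_bursts, initial_guess=10.0, alpha=0.5):
    estimate = initial_guess
    for actual in observed_bursts:
        estimate = alpha * actual + (1 - alpha) * estimate
    return estimate

print(predict_next_burst([6, 4, 6, 4, 13, 13, 13]))  # -> 12.0
```

A larger alpha weights recent bursts more heavily, so the estimate adapts faster but is also noisier.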