Congestion control

Congestion control is a family of mechanisms used in computer networks to regulate the traffic entering a network so that resources such as bandwidth and buffers are not overwhelmed. The goal is to use the network efficiently while keeping packet loss, delay, and jitter within acceptable bounds. Congestion control works by adjusting the rate at which a sender transmits data in response to observed network conditions, typically using feedback from the network or signals inferred at the endpoints, such as loss or delay.

Most transport-layer congestion control schemes are end-to-end and operate on the sender's congestion window (cwnd). Early designs used slow start and congestion avoidance, applying an additive increase/multiplicative decrease (AIMD) rule: increase cwnd gradually when the path appears uncongested, and reduce it sharply when congestion is detected, e.g., via packet loss. Modern schemes refine these ideas and may include delay-based components or explicit congestion notification.

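To make the AIMD rule concrete, here is a minimal Python sketch of the window arithmetic only: slow start below a threshold, additive increase above it, and a multiplicative decrease when congestion is detected. The class name, the initial ssthresh, and the Reno-style halving factor are assumptions for illustration, not a faithful TCP implementation.

```python
# Minimal AIMD sketch (window arithmetic only; not a faithful TCP implementation).
# Window sizes are in segments; the initial ssthresh is an arbitrary assumption.

class AimdSender:
    def __init__(self, ssthresh=64.0):
        self.cwnd = 1.0            # congestion window, in segments
        self.ssthresh = ssthresh   # slow-start threshold

    def on_ack(self):
        """Called once per newly ACKed segment."""
        if self.cwnd < self.ssthresh:
            # Slow start: one extra segment per ACK (doubles cwnd each RTT).
            self.cwnd += 1.0
        else:
            # Congestion avoidance: additive increase, about one segment per RTT.
            self.cwnd += 1.0 / self.cwnd

    def on_congestion(self):
        """Called when congestion is detected, e.g., via packet loss."""
        # Multiplicative decrease: Reno-style halving of the window.
        self.ssthresh = max(self.cwnd / 2.0, 2.0)
        self.cwnd = self.ssthresh
```

A real sender would also handle retransmission, fast recovery, and pacing; the sketch only captures how cwnd moves.
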
Congestion control can be implemented end-to-end, relying on receiver feedback (ACKs), or with network-assisted mechanisms such as Explicit Congestion Notification (ECN) and Active Queue Management (AQM) in routers. AQM schemes such as RED and more recent approaches aim to signal congestion before queues fill, reducing packet loss and keeping queueing latency under control.

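As a rough illustration of the AQM idea, the sketch below follows the RED approach of dropping (or ECN-marking) arriving packets with a probability that grows as the averaged queue length rises between two thresholds. The thresholds, maximum probability, and averaging weight are illustrative assumptions, and refinements of real RED (such as the inter-drop count correction) are omitted.

```python
import random

# Simplified RED-style AQM sketch. Thresholds, max probability, and the EWMA
# weight are illustrative assumptions, not tuned or recommended values.
MIN_TH = 5        # below this average queue length (packets): never drop/mark
MAX_TH = 15       # at or above this: always drop/mark
MAX_P = 0.1       # drop/mark probability as the average approaches MAX_TH
WEIGHT = 0.002    # EWMA weight for the averaged queue length

avg_queue = 0.0

def should_drop_or_mark(current_queue_len):
    """Decide, on each packet arrival, whether to drop (or ECN-mark) it."""
    global avg_queue
    # Track a smoothed queue length so short bursts are tolerated.
    avg_queue = (1 - WEIGHT) * avg_queue + WEIGHT * current_queue_len

    if avg_queue < MIN_TH:
        return False
    if avg_queue >= MAX_TH:
        return True
    # Between the thresholds, probability rises linearly toward MAX_P.
    p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p
```
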
Common examples include TCP variants such as Tahoe, Reno, and NewReno, and more recent implementations like CUBIC and Google's BBR, which use different models to estimate available bandwidth and RTT.

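CUBIC's main departure from classic AIMD is its window growth curve: a cubic function of the time elapsed since the last congestion event, anchored at the window size reached just before that event. A sketch of that curve, using the commonly cited default constants, is below; real implementations add TCP-friendliness and fast-convergence logic not shown here.

```python
# Sketch of CUBIC's window growth curve (window in segments, time in seconds).
# C and BETA are the commonly cited defaults; the TCP-friendly and
# fast-convergence behaviour of real CUBIC is omitted.

C = 0.4      # scaling constant
BETA = 0.7   # after a loss, cwnd is reduced to BETA * w_max

def cubic_window(t, w_max):
    """Target cwnd t seconds after the last congestion event, where w_max is
    the window size reached just before that event."""
    # K is the time at which the cubic curve climbs back to w_max.
    k = (w_max * (1.0 - BETA) / C) ** (1.0 / 3.0)
    return C * (t - k) ** 3 + w_max
```

With these constants, cubic_window(0.0, 100) is about 70 segments (the reduced window just after a loss); the curve flattens as it approaches the previous maximum around t = K, then grows faster again to probe for newly available capacity.
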
Congestion control is also employed by other transport protocols such as QUIC and SCTP. Challenges include bufferbloat, unfairness between flows with different RTTs, interaction with wireless links, and ensuring performance across diverse network conditions. Ongoing work combines end-to-end strategies with network-assisted feedback to improve robustness and efficiency.

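The RTT-unfairness point can be made concrete with the standard simplified throughput model for an AIMD (Reno-like) flow, throughput ≈ (MSS / RTT) · sqrt(3 / (2p)), where p is the packet loss rate. The numbers in the sketch below are purely illustrative; it just shows that two flows seeing the same loss rate but different RTTs end up with very different rates.

```python
from math import sqrt

# Simplified AIMD (Reno-style) throughput model, often written as
#   throughput ≈ (MSS / RTT) * sqrt(3 / (2 * p))
# where p is the packet loss probability. All numbers are illustrative.

def aimd_throughput_bytes_per_s(mss_bytes, rtt_s, loss_rate):
    return (mss_bytes / rtt_s) * sqrt(3.0 / (2.0 * loss_rate))

MSS = 1460     # bytes
LOSS = 0.01    # 1% loss, experienced by both flows

short_rtt = aimd_throughput_bytes_per_s(MSS, 0.020, LOSS)   # 20 ms RTT
long_rtt = aimd_throughput_bytes_per_s(MSS, 0.200, LOSS)    # 200 ms RTT

# The 20 ms flow ends up with roughly 10x the throughput of the 200 ms flow,
# even though both see the same loss rate: the RTT unfairness noted above.
print(f"20 ms RTT: {short_rtt * 8 / 1e6:.1f} Mbit/s")
print(f"200 ms RTT: {long_rtt * 8 / 1e6:.1f} Mbit/s")
```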