Thread-per-connection

Thread-per-connection is a concurrency model used by some network servers in which each incoming client connection is handled by a dedicated operating system thread. In this model, a new thread is created when a connection is accepted and the thread persists for the duration of the client session, performing blocking I/O within that thread. This provides a simple programming style, since code can use blocking calls and local per-connection state without explicit multiplexing or callback wiring.

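For illustration, the sketch below shows a minimal thread-per-connection echo server using only the Python standard library; the host, port, and echo behaviour are assumptions made for the example rather than part of the model itself. Each accepted connection gets one dedicated operating system thread that runs blocking calls for the whole client session.

    import socket
    import threading

    def handle_client(conn):
        # All I/O for this client uses ordinary blocking calls on this thread;
        # per-connection state lives in local variables, with no callbacks.
        with conn:
            while True:
                data = conn.recv(4096)       # blocks until the client sends data
                if not data:                 # an empty read means the client closed
                    break
                conn.sendall(data)           # echo the bytes back

    def serve(host="127.0.0.1", port=8080):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((host, port))
            srv.listen()
            while True:
                conn, _addr = srv.accept()   # blocks until a connection arrives
                # One dedicated thread per accepted connection, alive for the
                # whole client session.
                threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

    if __name__ == "__main__":
        serve()
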
Advantages include straightforward development, strong isolation between connections, and natural use of multi-core CPUs for parallel request handling. It is well suited for servers with moderate concurrency and workloads where blocking I/O is acceptable, and can be easy to reason about for developers new to asynchronous programming.

Disadvantages include poor scalability for large numbers of concurrent connections, as each thread consumes memory (including stack space) and incurs context-switching costs. Resource limits on thread creation and scheduling can become bottlenecks, and long-lived connections or bursts of activity can exhaust system resources. It tends to be less efficient on high-latency networks or workloads with many simultaneous clients.

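As a rough worked example of the memory cost: assuming a hypothetical 8 MiB default thread stack (a common Linux default), 10,000 concurrent connections would reserve on the order of 80 GiB of virtual address space for stacks alone, which is why servers that keep this model sometimes request smaller per-thread stacks. The snippet below is illustrative only; the 256 KiB figure is an assumption, and minimum sizes and granularity are platform-dependent.

    import threading

    # Illustrative only: threads created after this call request a smaller stack.
    # The value must be at least 32 KiB and, on some platforms, a multiple of the
    # system page size; too small a stack risks crashes on deep call chains.
    threading.stack_size(256 * 1024)
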
Alternatives and variants include thread pools (reusing a limited number of threads to service many connections), asynchronous I/O and non-blocking sockets with event-driven or reactor patterns, and hybrid models that combine threads with events to improve scalability while preserving a blocking-style code path.

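A minimal sketch of the thread-pool variant mentioned above, reusing the hypothetical echo handler from the earlier example: a fixed number of worker threads (the pool size of 32 is an arbitrary assumption) service many connections, bounding memory and scheduling costs while the handler keeps its blocking, sequential style. The event-driven and reactor alternatives instead remove the blocking calls, multiplexing many connections on a few threads with a readiness loop (for example, Python's selectors or asyncio modules).

    import socket
    from concurrent.futures import ThreadPoolExecutor

    def handle_client(conn):
        # Same blocking echo handler as in the thread-per-connection sketch.
        with conn:
            while True:
                data = conn.recv(4096)
                if not data:
                    break
                conn.sendall(data)

    def serve(host="127.0.0.1", port=8080, workers=32):
        # A bounded pool replaces one-thread-per-connection; connections accepted
        # while all workers are busy wait in the executor's queue rather than
        # spawning new threads.
        with ThreadPoolExecutor(max_workers=workers) as pool, \
             socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((host, port))
            srv.listen()
            while True:
                conn, _addr = srv.accept()
                pool.submit(handle_client, conn)

    if __name__ == "__main__":
        serve()
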
Usage notes: thread-per-connection was once common in early web servers and daemon processes on platforms with robust threading support, but many modern systems prefer event-driven or asynchronous approaches to scale to thousands of concurrent connections.

See also: blocking I/O, asynchronous I/O, event-driven programming, thread pool, reactor pattern.