Hardness of Approximation

Hardness of approximation is a subfield of computational complexity that studies how closely optimization problems can be approximated in polynomial time. For a minimization problem, an α-approximation algorithm guarantees a solution whose value is at most α times the optimum; for a maximization problem, the guarantee is at least α times the optimum. The central question, for each problem, is which approximation factor is achievable by a polynomial-time algorithm and which factor is provably unattainable unless P = NP or similar complexity assumptions fail.
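
In symbols, writing OPT(I) for the optimum value on instance I and A(I) for the value returned by algorithm A, the two guarantees can be stated as follows (a standard formulation; by convention α ≥ 1 for minimization and α ≤ 1 for maximization):

```latex
% Approximation guarantee of an algorithm A, holding on every instance I:
A(I) \le \alpha \cdot \mathrm{OPT}(I), \quad \alpha \ge 1 \quad \text{(minimization)}
A(I) \ge \alpha \cdot \mathrm{OPT}(I), \quad \alpha \le 1 \quad \text{(maximization)}
```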

A core toolkit includes the PCP theorem and gap-introducing reductions, which convert decision problems into optimization problems with a promised gap between satisfiable and partially satisfiable instances. L-reductions and related frameworks preserve approximation factors across reductions, enabling broad inapproximability results.
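
To see why such a gap implies inapproximability, suppose a problem's instances are promised to be either fully satisfiable or at most an s-fraction satisfiable, and suppose a polynomial-time ρ-approximation existed with ρ > s. The sketch below (the ρ-approximation is a hypothetical parameter, not a real routine) would then decide the promise problem, which is exactly what the hardness results rule out:

```python
def decide_gap_problem(instance, num_clauses, approx_alg, rho, s):
    """Decide a (1, s)-gap problem using a hypothetical rho-approximation
    with rho > s.

    Promise: either all clauses of `instance` are satisfiable (YES), or
    at most an s fraction are simultaneously satisfiable (NO).
    `approx_alg` returns a solution value with rho * OPT <= value <= OPT.
    """
    value = approx_alg(instance)
    # YES case: OPT = num_clauses, so value >= rho * num_clauses > s * num_clauses.
    # NO  case: value <= OPT <= s * num_clauses.
    return value > s * num_clauses
```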

Key results illustrate the landscape of hardness of approximation. For Set Cover, under standard complexity assumptions, no polynomial-time algorithm can achieve an approximation factor better than (1 − o(1)) ln n, while the greedy algorithm achieves a guarantee of ln n + O(1).
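
A minimal sketch of the greedy algorithm, assuming the universe is given as a Python set and the collection as a list of sets (names and input format illustrative):

```python
def greedy_set_cover(universe, sets):
    """Greedy Set Cover: repeatedly pick the set covering the most
    uncovered elements; this achieves an H_n <= ln n + 1 guarantee."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        # Choose the set with the largest intersection with `uncovered`.
        best = max(sets, key=lambda s: len(uncovered & s))
        if not (uncovered & best):
            raise ValueError("universe not coverable by the given sets")
        cover.append(best)
        uncovered -= best
    return cover

# Example: greedy first picks {1, 2, 3}, then {4, 5}.
print(greedy_set_cover({1, 2, 3, 4, 5},
                       [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]))
```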

Vertex Cover admits a simple 2-approximation, yet it is NP-hard to approximate within a factor of about 1.3606.
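
The 2-approximation can be obtained from a maximal matching: take both endpoints of every matched edge. Each matched edge forces any cover, including the optimum, to contain at least one of its endpoints, so the result is at most twice optimal. A minimal sketch:

```python
def vertex_cover_2_approx(edges):
    """2-approximation for Vertex Cover via a greedy maximal matching:
    whenever an edge has both endpoints uncovered, it joins the matching
    and both endpoints join the cover, so |cover| <= 2 * OPT."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Example: on the path 1-2-3-4 this prints {1, 2, 3, 4};
# the optimum cover {2, 3} has size 2, so the ratio is exactly 2 here.
print(vertex_cover_2_approx([(1, 2), (2, 3), (3, 4)]))
```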

Max-Cut is NP-hard to approximate within any factor better than 16/17, even though the Goemans–Williamson algorithm achieves a 0.878… approximation ratio.
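
The Goemans–Williamson ratio comes from rounding a semidefinite-programming relaxation, which is beyond a short sketch; for comparison, the elementary randomized baseline below (not Goemans–Williamson) cuts each edge with probability 1/2 and therefore guarantees half the optimum in expectation:

```python
import random

def random_cut(n, edges):
    """Elementary Max-Cut baseline: place each vertex on a uniformly
    random side. Every edge is cut with probability 1/2, so the
    expected cut value is m/2 >= OPT/2 (a 0.5-approximation in
    expectation)."""
    side = [random.random() < 0.5 for _ in range(n)]
    return sum(1 for u, v in edges if side[u] != side[v])

# Example: a triangle has maximum cut 2; the expected random cut is 1.5.
print(random_cut(3, [(0, 1), (1, 2), (0, 2)]))
```
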
Max-3-SAT and related problems are hard to approximate even to specific constant factors: it is NP-hard to distinguish completely satisfiable instances from those in which only a 7/8 fraction of the clauses can be simultaneously satisfied.
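
The 7/8 threshold is no accident: a uniformly random assignment satisfies a clause with three distinct literals with probability exactly 7/8. A brute-force check of that expectation (illustrative only; exponential in the number of variables):

```python
from itertools import product

def expected_fraction_satisfied(clauses, n):
    """Exact expected fraction of clauses satisfied by a uniformly
    random assignment over n variables. A clause is a list of
    (variable_index, negated) literal pairs."""
    total = 0
    for assignment in product([False, True], repeat=n):
        total += sum(
            any(assignment[v] != negated for v, negated in clause)
            for clause in clauses
        )
    return total / (2 ** n * len(clauses))

# One clause (x0 or x1 or not x2) fails on 1 of 8 assignments: prints 0.875.
print(expected_fraction_satisfied([[(0, False), (1, False), (2, True)]], 3))
```
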
These results establish fundamental limits on algorithmic performance for a wide range of optimization problems and motivate ongoing research into tighter bounds, refined reductions, and alternate computational models.