Minimizers

Minimizers are points in a domain at which a function attains its smallest value. For a function f: X → R, a point x* ∈ X is a global minimizer if f(x*) ≤ f(x) for every x ∈ X. A local minimizer satisfies f(x*) ≤ f(x) for all x in some neighborhood of x*. If the inequality is strict for all x ≠ x*, then x* is a strict minimizer. A function may have multiple minimizers, especially when the problem is non-convex. In convex optimization, every local minimizer is global, and if the function is strictly convex, a minimizer is unique whenever one exists.
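
As a concrete illustration of non-uniqueness, the short sketch below (a hedged example in Python, assuming NumPy is available; the function and grid bounds are illustrative choices) evaluates the non-convex function f(x) = (x^2 − 1)^2 on a grid. Its smallest value, 0, is attained at the two global minimizers x = −1 and x = 1:

    import numpy as np

    def f(x):
        # Non-convex quartic with two global minimizers at x = -1 and x = 1.
        return (x**2 - 1)**2

    xs = np.linspace(-2.0, 2.0, 40001)   # grid with step 1e-4 (illustrative)
    vals = f(xs)
    fmin = vals.min()                    # smallest value found, ~0.0
    # Collect every grid point whose value is (numerically) minimal.
    minimizers = xs[np.isclose(vals, fmin, atol=1e-12)]
    print(fmin)                          # ~0.0
    print(minimizers)                    # points near -1.0 and 1.0

A grid search is used rather than a local solver precisely because a local method started near one minimizer would report only that one of the two.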

In constrained problems, a minimizer must attain the smallest objective value among the points that satisfy the constraints, not merely the smallest value overall. Optimality conditions such as the Karush-Kuhn-Tucker (KKT) conditions for differentiable problems with inequality constraints, and Lagrange multiplier methods for equality constraints, provide practical characterizations of minimizers.
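
As a minimal numerical sketch of the KKT conditions (assuming SciPy's scipy.optimize.minimize is available; the problem, starting point, and tolerances are illustrative), consider minimizing x^2 + y^2 subject to x + y ≥ 1. The minimizer is (1/2, 1/2), the constraint is active, and stationarity ∇f = λ∇g holds with λ = 1 ≥ 0:

    import numpy as np
    from scipy.optimize import minimize

    f = lambda v: v[0]**2 + v[1]**2       # objective
    g = lambda v: v[0] + v[1] - 1.0       # inequality constraint, g(v) >= 0

    res = minimize(f, x0=[2.0, 0.0], method="SLSQP",
                   constraints=[{"type": "ineq", "fun": g}])
    x = res.x                             # ~ (0.5, 0.5)

    grad_f = 2 * x                        # ∇f at the solution
    grad_g = np.array([1.0, 1.0])         # ∇g (constant here)
    lam = grad_f[0] / grad_g[0]           # multiplier implied by stationarity
    print(x, lam)                         # ~ [0.5 0.5], lam ~ 1.0
    print(np.allclose(grad_f, lam * grad_g, atol=1e-6))  # stationarity holds
    print(g(x) > -1e-6 and lam >= 0)      # feasibility and dual sign hold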

Existence of minimizers is a foundational concern. A classic result (the Weierstrass extreme value theorem) states that if the domain X is compact and f is continuous, a global minimizer exists. More general existence results rely on properties like coercivity (f(x) tends to infinity as ||x|| → ∞) and lower semicontinuity, which ensure the attainment of minimum values in broader settings.
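
A small numerical illustration, not a proof (assuming NumPy; the function and grid are illustrative): f(x) = x^4 − 3x is continuous and coercive on the non-compact real line, so the Weierstrass theorem does not apply directly, yet a global minimizer still exists, namely the stationary point x* = (3/4)^(1/3):

    import numpy as np

    f = lambda x: x**4 - 3 * x            # coercive: f(x) → ∞ as |x| → ∞
    x_star = (3 / 4) ** (1 / 3)           # root of f'(x) = 4x^3 - 3

    xs = np.linspace(-10.0, 10.0, 200001) # brute-force grid on [-10, 10]
    x_grid = xs[np.argmin(f(xs))]
    print(x_star, x_grid)                 # both ~ 0.9086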

Minimizers arise in many fields, including mathematics, economics, engineering, and machine learning. In optimization algorithms, one seeks points where the gradient vanishes (stationary points) or, in nonsmooth settings, where subgradients satisfy optimality conditions; convexity provides stronger guarantees, such as uniqueness of the minimizer under strict convexity.
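
The sketch below (a hedged example assuming NumPy; the quadratic, step size, and stopping tolerance are illustrative choices) runs plain gradient descent until the gradient nearly vanishes. Because the objective is a strictly convex quadratic, the stationary point it reaches is the unique global minimizer:

    import numpy as np

    A = np.array([[3.0, 1.0], [1.0, 2.0]])  # symmetric positive definite
    b = np.array([1.0, 1.0])

    def grad(x):
        # gradient of f(x) = 0.5 x·Ax - b·x, namely A x - b
        return A @ x - b

    x = np.zeros(2)
    t = 0.25                                # fixed step size, < 2/λ_max
    while np.linalg.norm(grad(x)) > 1e-10:  # stop once the gradient ~ 0
        x = x - t * grad(x)

    print(x)                                # computed stationary point
    print(np.linalg.solve(A, b))            # the unique minimizer, for comparison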
