optimiser

An optimiser, or optimisation algorithm, is a procedure designed to find the values of decision variables that maximise or minimise an objective function, subject to constraints. Optimisers are used across mathematics, engineering, economics, and computer science to improve performance, efficiency, or quality, or to reduce cost. In practice, a problem is defined by an objective function f(x) to be optimised, a set of decision variables x, and possibly constraints g_i(x) ≤ b_i or h_j(x) = c_j.

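As an illustrative sketch of this structure, the following Python snippet (using the SciPy library; the tooling and the particular function are assumptions, not part of the definition) minimises f(x) = (x0 − 1)^2 + (x1 − 2.5)^2 subject to the single constraint x0 + x1 ≤ 2:

    import numpy as np
    from scipy.optimize import minimize

    def f(x):
        # Objective function to minimise: a simple smooth quadratic.
        return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

    # SciPy encodes an inequality constraint g(x) <= b as fun(x) >= 0,
    # so x0 + x1 <= 2 becomes 2 - x0 - x1 >= 0.
    constraints = [{"type": "ineq", "fun": lambda x: 2.0 - x[0] - x[1]}]

    result = minimize(f, x0=np.array([0.0, 0.0]), constraints=constraints)
    print(result.x, result.fun)  # optimal solution and optimal value

Here the unconstrained minimum at (1, 2.5) violates the constraint, so the optimiser returns the nearest feasible point, approximately (0.25, 1.75).
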
Problems can be unconstrained or constrained, continuous or discrete, and may seek a global optimum or a local one. Special cases include linear programming, convex optimisation, nonlinear programming, integer programming, and combinatorial optimisation. Exact methods guarantee optimality under the model assumptions; they may be impractical for large or complex problems, in which case heuristic or approximate methods are used.

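As a concrete instance of linear programming, the sketch below (again assuming SciPy; the coefficients are illustrative) maximises x0 + 2·x1 subject to x0 + x1 ≤ 4, x0 − x1 ≤ 2, and x ≥ 0 by minimising the negated objective:

    from scipy.optimize import linprog

    c = [-1.0, -2.0]                 # negated objective: maximise x0 + 2*x1
    A_ub = [[1.0, 1.0],              # x0 + x1 <= 4
            [1.0, -1.0]]             # x0 - x1 <= 2
    b_ub = [4.0, 2.0]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)           # optimal solution and maximised value

Because the problem is linear, a simplex or interior-point solver returns a provably optimal vertex, here (0, 4) with value 8.
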
Common deterministic methods include gradient-based algorithms (gradient descent, Newton's method, quasi-Newton, conjugate gradient) for smooth problems, and the simplex method or interior-point methods for linear or convex problems. Stochastic and metaheuristic approaches, such as genetic algorithms, simulated annealing, or tabu search, explore the search space more broadly but without guarantees of global optimality.

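To make the gradient-based family concrete, the following minimal sketch (NumPy assumed; the fixed step size and iteration count are illustrative choices, and practical implementations add line searches or adaptive steps) applies plain gradient descent to the quadratic above:

    import numpy as np

    def grad_f(x):
        # Gradient of f(x) = (x0 - 1)^2 + (x1 - 2.5)^2.
        return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.5)])

    x = np.zeros(2)                  # starting point
    step = 0.1                       # fixed step size (learning rate)
    for _ in range(200):
        x = x - step * grad_f(x)     # move against the gradient

    print(x)                         # approaches the minimiser (1.0, 2.5)
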
In computing, optimisers appear in compilers and runtime systems to improve code speed or memory usage, and in machine learning to tune model parameters via loss minimisation. ML optimisers such as stochastic gradient descent, Adam, or RMSProp adjust parameters iteratively based on gradients and learning rates.

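The stochastic gradient descent update can be shown in a few lines; this sketch fits a linear model with squared loss on synthetic data (the data, batch size, and learning rate are all illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))          # synthetic features (assumption)
    y = X @ np.array([1.0, -2.0, 0.5])     # synthetic targets (assumption)

    w = np.zeros(3)                        # model parameters
    lr = 0.05                              # learning rate
    for _ in range(500):
        idx = rng.integers(0, len(X), size=10)        # sample a mini-batch
        Xb, yb = X[idx], y[idx]
        grad = 2.0 * Xb.T @ (Xb @ w - yb) / len(idx)  # gradient of batch loss
        w -= lr * grad                                # SGD parameter update

    print(w)                               # approaches [1.0, -2.0, 0.5]

Adam and RMSProp follow the same loop but rescale each step using running averages of past gradients.
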
Choosing an optimiser involves considering problem structure, noise, constraints, desired guarantees, and computational resources. Outcomes include the optimal value, the optimal solution, sensitivity to changes, and computational cost.
