Metaoptimization

Metaoptimization refers to the study and practice of optimizing the optimization process itself. It treats the choice of algorithms, their configurations, and the overall procedure as the object of optimization, rather than the problem’s objective function alone. In this view, a solver or learning model is embedded within a higher-level search that seeks to improve performance across a family of problems or tasks.
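The nested structure described above can be made concrete with a minimal sketch: an outer random search tunes the step size of an inner gradient-descent solver, scoring each candidate across a small family of quadratic problems. All names and the toy problem family here are illustrative assumptions, not a standard API.

```python
import random

def inner_solve(lr, problem, steps=50):
    """Inner optimizer: gradient descent with step size `lr`
    on the quadratic f(x) = a * (x - b)**2."""
    a, b = problem
    x = 0.0
    for _ in range(steps):
        x -= lr * 2 * a * (x - b)   # gradient of a*(x - b)^2
    return a * (x - b) ** 2         # final objective value

def metaoptimize(problems, trials=200, seed=0):
    """Outer optimizer: random search for the step size that
    minimizes the mean final objective across the problem family."""
    rng = random.Random(seed)
    best_lr, best_score = None, float("inf")
    for _ in range(trials):
        lr = 10 ** rng.uniform(-3, 0)   # log-uniform sample in [0.001, 1]
        score = sum(inner_solve(lr, p) for p in problems) / len(problems)
        if score < best_score:
            best_lr, best_score = lr, score
    return best_lr, best_score

problems = [(1.0, 3.0), (2.0, -1.0), (0.5, 5.0)]  # (a, b) pairs
lr, score = metaoptimize(problems)
```

The key point is that the outer loop never sees the quadratics' gradients; it only observes how well the inner solver performs under each configuration.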

The field encompasses several related tasks. Algorithm configuration aims to tune the parameters of a given optimizer. Algorithm selection chooses among multiple solvers for a problem instance. Hyper-heuristics develop strategies that generate or select heuristics for solving problems. AutoML automates model selection and hyperparameter tuning in machine learning, while meta-learning uses past experience to accelerate learning on new tasks. Collectively, these activities can form nested or hierarchical optimization loops, where an outer optimizer searches for the best inner configuration.

Common methods include Bayesian optimization, sequential model-based optimization, and evolutionary algorithms, plus specialized tools such as SMAC, ParamILS, Hyperband, and reinforcement learning approaches. Practitioners may use these methods to minimize runtime, maximize solution quality, or improve robustness across diverse instances, often with multi-objective or multi-task objectives.

Applications span AutoML and neural architecture search, solver configuration for SAT/SMT and mixed-integer programs, and broader automated decision-support systems in operations research and engineering. Metaoptimization also intersects with metaheuristics, using higher-level search strategies to guide lower-level optimization, and with meta-learning to transfer knowledge between related problems.

Challenges include computational cost, the risk of overfitting to benchmark sets, reproducibility, and the transferability of tuned configurations across domains or distributions.
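Among the methods mentioned above, Hyperband is built around successive halving: evaluate many configurations cheaply, discard the worst, and re-evaluate survivors with a larger budget. A minimal sketch of that allocation strategy, with a toy evaluation function (illustrative only, not any library's API):

```python
import random

def successive_halving(configs, evaluate, budget=1, eta=3):
    """Evaluate all configs on the current budget, keep the best
    1/eta fraction, then repeat with eta times more budget."""
    while len(configs) > 1:
        scored = sorted(configs, key=lambda c: evaluate(c, budget))
        configs = scored[: max(1, len(configs) // eta)]  # keep top 1/eta
        budget *= eta                                    # grow the budget
    return configs[0]

# Toy evaluation: loss approaches |config - 0.5| as budget grows,
# so low-budget rankings are already informative here.
def evaluate(config, budget):
    return abs(config - 0.5) + 1.0 / budget

rng = random.Random(1)
configs = [rng.random() for _ in range(27)]
best = successive_halving(configs, evaluate)
```

With 27 configurations and eta=3, the schedule runs three rounds (27 → 9 → 3 → 1), spending most of the total budget on the few promising survivors.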
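The overfitting and transferability concerns above are commonly checked by tuning on one set of instances and measuring cost on held-out ones. A minimal sketch with a toy runtime model (all names and the quadratic cost are assumptions for illustration):

```python
import random

def runtime(config, instance):
    """Toy runtime model: each instance has its own best setting."""
    return (config - instance) ** 2

def tune(instances, trials=100, seed=0):
    """Pick the candidate config minimizing mean runtime on the tuning set."""
    rng = random.Random(seed)
    candidates = [rng.uniform(0, 1) for _ in range(trials)]
    return min(candidates,
               key=lambda c: sum(runtime(c, i) for i in instances) / len(instances))

rng = random.Random(42)
instances = [rng.uniform(0, 1) for _ in range(20)]
train, holdout = instances[:10], instances[10:]   # disjoint instance sets

config = tune(train)
train_cost = sum(runtime(config, i) for i in train) / len(train)
holdout_cost = sum(runtime(config, i) for i in holdout) / len(holdout)
```

A large gap between the tuning-set cost and the held-out cost signals that the configuration has overfit the benchmark rather than captured properties of the instance distribution.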