maxNoImprov

MaxNoImprov is a stopping criterion used in iterative optimization and heuristic search to terminate a run when no improvement in the objective value has been observed for a specified number of consecutive iterations. Because it reacts to stagnation rather than elapsed time, it avoids spending computational budget on a run that has stopped making progress.

Formally, during an optimization run, the current best objective value is tracked. A counter is incremented each iteration that does not produce an improvement over this best value and is reset to zero when an improvement occurs. If the counter reaches the predefined threshold maxNoImprov, the search stops. Variants may define "improvement" with strict or relaxed criteria, or allow resets under certain conditions such as minor improvements.
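
The following is a minimal Python sketch of this counter logic in a simple local-search setting; the objective, initial_solution, and neighbor callables and the parameter names are illustrative assumptions rather than a specific library's API.

    def optimize(objective, initial_solution, neighbor, max_no_improv=50, minimize=True):
        # Track the best solution and objective value seen so far.
        best = initial_solution
        best_value = objective(best)
        no_improv = 0  # consecutive iterations without improvement

        while no_improv < max_no_improv:
            candidate = neighbor(best)
            value = objective(candidate)
            improved = value < best_value if minimize else value > best_value
            if improved:
                best, best_value = candidate, value
                no_improv = 0   # reset the counter on improvement
            else:
                no_improv += 1  # one more stagnant iteration

        return best, best_value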

MaxNoImprov is widely used in local search, hill-climbing, simulated annealing, genetic algorithms, and hyperparameter tuning. It helps prevent wasted effort in regions of the search space that yield no progress and provides a simple, problem-agnostic termination rule. The choice of the threshold is problem-dependent: too small a value risks premature termination, while too large a value may waste time on diminishing returns. In practice, practitioners may combine maxNoImprov with other criteria, such as time limits, maximum iterations, or restart strategies, and may adapt the threshold dynamically based on observed progress or noise levels.
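
As a sketch of such a combination, the check below stops when any one of a stagnation limit, an iteration cap, or a wall-clock limit is reached; the function and parameter names are illustrative assumptions.

    import time

    def should_stop(no_improv, iteration, start_time,
                    max_no_improv=50, max_iterations=10_000, time_limit_s=60.0):
        # Terminate as soon as any single criterion triggers.
        if no_improv >= max_no_improv:
            return True
        if iteration >= max_iterations:
            return True
        if time.monotonic() - start_time >= time_limit_s:
            return True
        return False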

Related concepts include early stopping, patience parameters in machine learning, and stagnation limits. MaxNoImprov emphasizes robustness to minor fluctuations and non-monotonic objective landscapes.
