Fixed-share

Fixed-share is an online learning algorithm designed to track the best predictor (or expert) in environments where the optimal choice may change over time. It extends the Hedge (exponentially weighted average) framework by redistributing a fixed fraction of weight across all experts each round, letting the algorithm switch between experts at a controlled rate.

In operation, the algorithm maintains a weight for each expert representing its current credibility. After observing the loss of each expert, the standard Hedge update increases or decreases weights based on performance. A subsequent sharing step distributes a fraction of the total weight, controlled by a parameter alpha in [0,1], among all experts. This sharing allows mass to migrate from currently favored experts to others, facilitating adaptation when the best expert changes. A common implementation computes intermediate scores, applies the exponential update, and then mixes the updated scores with a uniform distribution weighted by alpha.

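As a concrete illustration, here is a minimal sketch of one round of this update in Python, assuming losses in [0, 1] and an exponential-update learning rate eta; the function name, signature, and default values are illustrative rather than taken from any particular library.

```python
import numpy as np

def fixed_share_update(weights, losses, eta=0.5, alpha=0.05):
    """One round of a Fixed-share style update (illustrative sketch).

    weights : current probability vector over the N experts
    losses  : per-expert losses for this round, assumed in [0, 1]
    eta     : learning rate of the exponential (Hedge) step
    alpha   : fraction of total weight shared uniformly each round
    """
    # Hedge step: exponentially penalize experts by their loss,
    # then renormalize to a probability vector.
    v = weights * np.exp(-eta * losses)
    v = v / v.sum()

    # Sharing step: mix with the uniform distribution, so every
    # expert always retains at least alpha / n of the total weight.
    n = len(weights)
    return alpha / n + (1.0 - alpha) * v
```

Starting from uniform weights and applying this update once per round gives the behavior described above; the uniform floor of alpha / n is what allows a previously poor expert to recover quickly when it becomes the best.
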
The parameter alpha governs the degree of switching: small values favor stability and slow adaptation, while larger values permit quicker responsiveness to changes in which experts perform best.
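
To make this trade-off tangible, the following toy experiment (an assumed setup, reusing the fixed_share_update sketch above) counts how many rounds the weights need to favor a new best expert after a single switch:

```python
import numpy as np

def rounds_to_recover(alpha, eta=0.5, horizon=200, switch=100):
    """Rounds needed, after a switch, for the new best expert's weight
    to exceed 1/2 in a two-expert toy problem where expert 0 is best
    before round `switch` and expert 1 is best afterwards."""
    w = np.array([0.5, 0.5])
    for t in range(horizon):
        best = 0 if t < switch else 1
        losses = np.array([0.0 if i == best else 1.0 for i in range(2)])
        w = fixed_share_update(w, losses, eta=eta, alpha=alpha)
        if t >= switch and w[1] > 0.5:
            return t - switch + 1
    return None

# A larger share, e.g. rounds_to_recover(0.1), typically recovers in
# fewer rounds than a small one such as rounds_to_recover(0.001), at
# the cost of never concentrating as sharply on a single expert.
```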

Theoretical results provide regret guarantees relative to the best sequence of experts under a bound on the number of switches. Specifically, the regret scales with the number of allowed switches as well as with the time horizon and the number of experts, yielding favorable performance when the optimal predictor changes infrequently compared to the total number of rounds.
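
For reference, such bounds are often stated in roughly the following shape (an asymptotic form assumed from standard statements in the literature, not given in this text), where T is the time horizon, N the number of experts, and k the allowed number of switches:

```latex
% Typical shape of a tracking-regret bound for Fixed-share with
% suitably tuned eta and alpha (assumed from standard literature):
R_T = O\!\left( \sqrt{\, T \left( k \ln N + k \ln \tfrac{T}{k} \right) \,} \right)
```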

Applications of fixed-share include online prediction tasks where the environment is non-stationary and a fixed pool of predictors is available. It is particularly relevant for tracking the best expert over time and in ensemble forecasting settings where different models may become relevant at different periods. The concept appears in the online learning literature, originating in Herbster and Warmuth's work on tracking the best expert, as a method for balancing stability with adaptability.
