preferencebased

Preferencebased, often written as preference-based, refers to approaches that use preferences rather than explicit numerical utilities to guide decisions, learn models, or optimize outcomes. It is used across decision theory, machine learning, economics, and human–computer interaction.

In decision making and optimization, systems collect ordinal judgments—rankings or pairwise comparisons—about options. From these, they construct an ordering or a latent utility consistent with the ordinal information and select the top alternatives, or optimize with respect to a latent preference model. Methods include pairwise comparison, rank aggregation, and Bayesian preference elicitation, as well as preference-based reinforcement learning.

In machine learning, preference-based learning learns from user preferences to rank items or predict choices. Tasks include learning to rank, ranking-based recommender systems, and Bayesian optimization with preferences. Active learning can query users for the most informative comparisons to reduce sample complexity.

Advantages of preference-based methods include natural handling of subjective criteria and insensitivity to scale, along with the ability to integrate multiple criteria via induced preferences. Limitations include data intensity, susceptibility to noisy or inconsistent preferences, issues of collinearity, and the need for careful elicitation design. Evaluation often relies on rank-based metrics rather than absolute error, which can complicate interpretation.

Applications span personalized recommendations, search ranking, product configuration, and policy selection in multi-criteria settings, as well as interactive systems that adapt to user choices.
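One way to make the latent-utility idea above concrete is a Bradley–Terry model fit to pairwise comparisons by gradient ascent. The following is a minimal sketch, not a production implementation; the item names and comparison data are invented for illustration:

```python
import math

# Pairwise comparisons as (winner, loser) pairs. Invented illustrative data.
comparisons = [("a", "b"), ("a", "c"), ("b", "c"), ("a", "b"), ("c", "b")]
items = sorted({x for pair in comparisons for x in pair})
util = {i: 0.0 for i in items}  # latent utilities, initialized to zero

# Fit the Bradley-Terry model by gradient ascent on the log-likelihood,
# where P(w beats l) = sigmoid(util[w] - util[l]).
lr = 0.1
for _ in range(2000):
    grad = {i: 0.0 for i in items}
    for w, l in comparisons:
        p = 1.0 / (1.0 + math.exp(-(util[w] - util[l])))
        grad[w] += 1.0 - p
        grad[l] -= 1.0 - p
    for i in items:
        util[i] += lr * (grad[i] - 0.01 * util[i])  # small L2 penalty pins the scale

# Rank items by inferred utility and select the top alternative.
ranking = sorted(items, key=util.get, reverse=True)
print(ranking)
```

Note that only ordinal information enters the model: the inferred utilities are meaningful up to the comparisons they explain, which is exactly the scale-insensitivity discussed above.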
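The active-learning strategy of querying users for the most informative comparison can be sketched with uncertainty sampling: ask about the pair whose outcome the current model is least sure of. This is one simple heuristic among many; the utilities passed in are invented for illustration:

```python
import itertools
import math

def most_informative_pair(util):
    """Pick the item pair whose comparison outcome is most uncertain under
    a Bradley-Terry model, i.e. whose predicted win probability has the
    highest entropy (closest to 0.5). A simple uncertainty-sampling heuristic."""
    def entropy(p):
        return -(p * math.log(p) + (1 - p) * math.log(1 - p))

    def win_prob(pair):
        return 1.0 / (1.0 + math.exp(-(util[pair[0]] - util[pair[1]])))

    return max(itertools.combinations(sorted(util), 2),
               key=lambda pair: entropy(win_prob(pair)))

# Invented utilities: "a" and "b" are nearly tied while "c" is far behind,
# so the most informative next query is the a-vs-b comparison.
print(most_informative_pair({"a": 1.0, "b": 0.9, "c": -2.0}))
```

Repeatedly querying such pairs and refitting the model is what reduces sample complexity relative to asking about comparisons whose outcome is already predictable.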
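As an example of the rank-based evaluation mentioned above, Kendall's tau compares a predicted ranking to a reference ranking by counting agreeing versus disagreeing pairs, with no reference to absolute scores. A minimal sketch for rankings of distinct items without ties:

```python
import itertools

def kendall_tau(ranking_a, ranking_b):
    """Kendall rank correlation between two rankings of the same items:
    (concordant pairs - discordant pairs) / total pairs.
    Returns 1.0 for identical orderings and -1.0 for reversed ones."""
    pos_a = {item: i for i, item in enumerate(ranking_a)}
    pos_b = {item: i for i, item in enumerate(ranking_b)}
    concordant = discordant = 0
    for x, y in itertools.combinations(ranking_a, 2):
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) > 0:
            concordant += 1
        else:
            discordant += 1
    n = len(ranking_a)
    return (concordant - discordant) / (n * (n - 1) / 2)

# One adjacent pair ("b", "c") is swapped between the two rankings.
print(kendall_tau(["a", "b", "c", "d"], ["a", "c", "b", "d"]))
```

Such pairwise-agreement scores are harder to read off than an absolute error, which is the interpretation difficulty noted above.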