FairnessConstraints

FairnessConstraints refer to a set of mathematical restrictions imposed on a learning problem to ensure that model predictions satisfy predefined fairness criteria across protected attribute groups. They are used to formalize obligations such as equal treatment or equal impact for different demographic groups during model training and evaluation.
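For instance, a constraint of this kind can be checked directly on model predictions. The sketch below is a toy illustration of a demographic parity constraint with a tolerance; the array values, function names, and tolerance are assumptions for the example, not part of any particular library:

```python
# Toy check of a fairness constraint (hypothetical data; binary
# predictions and a binary protected attribute are assumed).
import numpy as np

def demographic_parity_gap(yhat, group):
    """Absolute difference in positive-prediction rates between the two groups."""
    yhat, group = np.asarray(yhat), np.asarray(group)
    return abs(yhat[group == 0].mean() - yhat[group == 1].mean())

def satisfies_constraint(yhat, group, tol=0.1):
    """Constraint: |P(yhat=1 | g=0) - P(yhat=1 | g=1)| <= tol."""
    return demographic_parity_gap(yhat, group) <= tol

yhat = [1, 0, 1, 1, 0, 1, 0, 0]   # model predictions
grp  = [0, 0, 0, 0, 1, 1, 1, 1]   # protected attribute
print(demographic_parity_gap(yhat, grp))         # → 0.5 (rates 0.75 vs 0.25)
print(satisfies_constraint(yhat, grp, tol=0.1))  # → False
```

Here the constraint is violated at tolerance 0.1 because the two groups receive positive predictions at rates 0.75 and 0.25.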

Common fairness notions expressed by such constraints include demographic parity, equalized odds, and equal opportunity. Demographic parity requires equal prediction rates across groups, regardless of actual outcomes. Equalized odds demands equal false positive and true positive rates across groups. Equal opportunity focuses on equal true positive rates. In addition to group-based notions, individual fairness aims for similar individuals to receive similar predictions, though this is harder to express with simple group constraints.

FairnessConstraints are typically written as equalities or inequalities on predicted outcomes, scores, or error rates, for example restricting the difference in average predictions between groups to be within a tolerance.

In practice, fairness constraints are incorporated through in-processing methods, which modify the learning objective to include the constraints, often via Lagrangian multipliers or penalty terms. They can also be used in pre-processing steps or post-processing adjustments, but in-processing remains a common approach for optimizing both accuracy and fairness simultaneously.

Challenges include trade-offs between predictive accuracy and fairness, choice of a fairness notion appropriate to the context, and availability of sensitive attribute information. Additionally, satisfying one notion may violate another, and some settings may render the constraints infeasible. FairnessConstraints thus require careful specification, context-aware selection of fairness goals, and ongoing evaluation.
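The penalty-term flavor of in-processing can be sketched as follows, assuming a plain logistic regression whose objective adds a squared penalty on the gap in mean predicted scores between groups. All names, data, and hyperparameters here are illustrative, not a specific library's API:

```python
# Sketch: in-processing fairness via a penalty term. Logistic regression is
# trained by gradient descent on cross-entropy plus
# lam * (gap in mean predicted score between the two groups)^2.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_fair_logreg(X, y, group, lam=5.0, lr=0.2, steps=3000):
    """Minimize BCE(w) + lam * (mean_score_g0 - mean_score_g1)**2."""
    n, d = X.shape
    w = np.zeros(d)
    m0, m1 = (group == 0), (group == 1)
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_bce = X.T @ (p - y) / n
        gap = p[m0].mean() - p[m1].mean()   # score gap between groups
        s = p * (1.0 - p)                   # derivative of the sigmoid
        dgap = (X[m0] * s[m0, None]).mean(axis=0) \
             - (X[m1] * s[m1, None]).mean(axis=0)
        w -= lr * (grad_bce + 2.0 * lam * gap * dgap)
    return w

# Synthetic data in which the outcome rate differs strongly by group,
# so an unconstrained model learns a large between-group score gap.
rng = np.random.default_rng(0)
n = 400
group = rng.integers(0, 2, n)
y = (rng.random(n) < 0.2 + 0.6 * group).astype(float)
X = np.column_stack([rng.normal(size=n), group.astype(float), np.ones(n)])

def score_gap(w):
    p = sigmoid(X @ w)
    return abs(p[group == 0].mean() - p[group == 1].mean())

w_plain = train_fair_logreg(X, y, group, lam=0.0)
w_fair = train_fair_logreg(X, y, group, lam=5.0)
print(score_gap(w_plain) > score_gap(w_fair))  # penalty shrinks the gap
```

Raising `lam` tightens the parity penalty at the cost of predictive accuracy, which is the accuracy-fairness trade-off noted above; a Lagrangian formulation would instead adapt the multiplier during training.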