priors

In Bayesian statistics, a prior is the probability distribution assigned to a parameter before any data are observed. It encodes beliefs or uncertainty about the parameter and can incorporate information from previous studies, expert judgment, or theoretical considerations.

Priors can be informative, reflecting specific knowledge; noninformative or objective priors aim to exert minimal influence when data are informative; and weakly informative priors provide regularization to stabilize inferences. They may be proper (integrate to one) or improper (do not). Conjugate priors are chosen for mathematical convenience because the posterior is of the same distribution family as the prior.
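
As a concrete illustration of these categories, the sketch below updates a flat, a weakly informative, and an informative Beta prior on the same binomial outcome; the data and hyperparameters are made-up illustrative values, not canonical recommendations. Because the Beta prior is conjugate to the binomial likelihood, each posterior stays in the Beta family.

```python
from scipy import stats

# Made-up data for illustration: 7 successes in 10 Bernoulli trials.
successes, trials = 7, 10

# Three Beta priors of varying informativeness (illustrative choices).
priors = {
    "flat Beta(1, 1)":          (1.0, 1.0),
    "weak Beta(2, 2)":          (2.0, 2.0),
    "informative Beta(20, 20)": (20.0, 20.0),
}

for label, (a, b) in priors.items():
    # Conjugate update: Beta(a, b) prior + binomial data
    # -> Beta(a + successes, b + failures) posterior.
    posterior = stats.beta(a + successes, b + trials - successes)
    print(f"{label:>26}: posterior mean = {posterior.mean():.3f}")
```

The informative prior pulls the posterior mean toward 0.5, while the flat prior leaves it close to the observed proportion.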

In Bayesian inference, the posterior distribution combines the likelihood with the prior via Bayes' theorem: the posterior is proportional to the likelihood times the prior, p(θ | y) ∝ p(y | θ) p(θ). With large data, the influence of the prior diminishes, but with sparse data it can dominate. Conjugate priors yield closed-form posteriors; common examples include Beta priors for binomial probabilities, Gamma priors for Poisson rates, normal priors for means with known variance, and Dirichlet priors for multinomial probabilities.
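
A minimal sketch of one conjugate pair from the list above, the Gamma prior for a Poisson rate (the true rate and hyperparameters are illustrative assumptions). It also shows the prior's influence fading: as the sample grows, the posterior mean converges to the maximum-likelihood estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 4.0          # rate of the data-generating Poisson (made up)
a0, b0 = 2.0, 1.0        # Gamma(shape, rate) prior; illustrative values

for n in (5, 50, 5000):
    y = rng.poisson(true_rate, size=n)
    # Conjugate update: Gamma(a0, b0) prior + Poisson counts
    # -> Gamma(a0 + sum(y), b0 + n) posterior.
    a_post, b_post = a0 + y.sum(), b0 + n
    print(f"n={n:5d}  posterior mean={a_post / b_post:.3f}  MLE={y.mean():.3f}")
```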

Choosing priors involves considerations of subjectivity and robustness. Analysts often perform prior sensitivity analysis, run prior predictive checks, or use hierarchical priors to share information across related parameters. In model selection or averaging, priors over models or over parameter spaces affect results and must be stated.
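
A minimal prior predictive check might look like the following sketch, assuming a Beta-Binomial setup with illustrative hyperparameters: draw parameters from the prior, simulate data from the likelihood, and inspect whether the implied data look plausible before any real data are used.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims, trials = 10_000, 10
a, b = 2.0, 2.0   # candidate weakly informative Beta prior (illustrative)

# Prior predictive: theta ~ Beta(a, b), then y ~ Binomial(trials, theta).
theta = rng.beta(a, b, size=n_sims)
y_sim = rng.binomial(trials, theta)

# Inspect the data distribution the prior implies *before* seeing data;
# if it concentrates on implausible outcomes, revise the prior.
for k in range(trials + 1):
    print(f"P(y = {k:2d}) ~= {(y_sim == k).mean():.3f}")
```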

In practical applications, priors appear in machine learning as regularization in Bayesian neural networks and in other probabilistic models. The prior structure influences learning, uncertainty quantification, and model complexity, and is a core component of Bayesian inference.
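
One standard instance of this connection: under a zero-mean Gaussian prior on the weights of a linear model, the MAP estimate coincides with ridge (L2) regularization. The sketch below uses synthetic data and illustrative variance values.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 50, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.5, -2.0, 0.5])            # made-up ground truth
y = X @ w_true + rng.normal(scale=1.0, size=n)

sigma2, tau2 = 1.0, 0.5   # noise variance and prior variance (illustrative)
lam = sigma2 / tau2       # N(0, tau2 I) prior on w <=> L2 penalty lambda

# MAP estimate under the Gaussian prior = ridge solution:
# w = (X'X + lam I)^{-1} X'y
w_map = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
print("MAP / ridge weights:", np.round(w_map, 3))
```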
