BNN
Bayesian neural networks (BNNs) are neural networks in which the network parameters, typically weights and biases, are treated as random variables with prior distributions. This yields probabilistic predictions and a distribution over functions rather than single point estimates.
In a BNN, the goal is to compute the posterior distribution over weights given data, p(w|D) ∝ p(D|w) p(w), where p(w) is the prior and p(D|w) is the likelihood. Predictions then marginalize over this posterior, p(y*|x*, D) = ∫ p(y*|x*, w) p(w|D) dw, which is generally intractable and must be approximated.
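As a minimal sketch of this marginalization (not any particular library's API), the predictive mean and uncertainty can be estimated by Monte Carlo averaging over posterior weight samples. Here sample_posterior is a hypothetical stand-in that draws from a unit Gaussian rather than a real posterior:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w):
    """A toy one-hidden-layer network; w packs all parameters."""
    W1, b1, W2, b2 = w
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def sample_posterior(n_samples, d_in=1, d_hidden=16):
    # Stand-in for samples from p(w|D); in practice these would come
    # from VI, MCMC, or a Laplace approximation (illustrative only).
    for _ in range(n_samples):
        yield (rng.normal(0, 1, (d_in, d_hidden)),
               rng.normal(0, 1, d_hidden),
               rng.normal(0, 1, (d_hidden, 1)),
               rng.normal(0, 1, 1))

x_star = np.linspace(-3, 3, 50)[:, None]
preds = np.stack([forward(x_star, w) for w in sample_posterior(100)])

mean = preds.mean(axis=0)  # Monte Carlo estimate of E[y*|x*, D]
std = preds.std(axis=0)    # epistemic uncertainty across sampled functions
```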
Common approaches include variational inference with tractable approximate posteriors, Markov chain Monte Carlo sampling, and Laplace or other local Gaussian approximations around a posterior mode.
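To make the Laplace route concrete, here is a sketch for a one-parameter logistic regression with a standard normal prior (synthetic data and arbitrary step size, chosen for illustration): find the MAP estimate by gradient descent, then use the curvature of the negative log posterior at that mode as the precision of a Gaussian approximation.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = (rng.random(50) < 1 / (1 + np.exp(-2.0 * x))).astype(float)

# Gradient descent to the MAP estimate of the weight w.
w = 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-w * x))
    grad = -np.sum((y - p) * x) + w  # d/dw of the negative log posterior
    w -= 0.01 * grad

# Curvature (Hessian) at the mode gives the Gaussian's precision:
# likelihood term sum p(1-p)x^2 plus 1.0 from the N(0, 1) prior.
p = 1 / (1 + np.exp(-w * x))
hessian = np.sum(p * (1 - p) * x**2) + 1.0
posterior_std = 1 / np.sqrt(hessian)
print(f"Laplace posterior: N({w:.2f}, {posterior_std:.2f}^2)")
```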
BNNs offer calibrated uncertainty estimates and some robustness to overfitting, since predictions average over many plausible parameter settings. They face challenges in computational cost, scaling inference to large modern architectures, and specifying meaningful priors over weights.
Historically, Bayesian neural networks emerged from work in the 1990s and 2000s (notably by David MacKay and Radford Neal) and have seen renewed interest as deep learning has made uncertainty quantification increasingly important.
BNNs are related to Gaussian processes in the limit of infinite width and are often contrasted with deterministic networks trained by maximum likelihood or MAP estimation, as well as with approximate uncertainty methods such as deep ensembles and Monte Carlo dropout.
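As a quick numerical illustration of the infinite-width connection (a sketch with arbitrary prior scales, not a formal derivation), sampling wide one-hidden-layer networks from a Gaussian prior and measuring the covariance of their outputs at two fixed inputs approximates evaluating the corresponding GP kernel:

```python
import numpy as np

rng = np.random.default_rng(2)
width = 5_000
x1, x2 = 0.5, -1.0

# Sample networks f(x) = sum_j v_j * tanh(w_j x + b_j) / sqrt(width)
# from a Gaussian prior; as width grows, their outputs at fixed inputs
# become jointly Gaussian (Neal's infinite-width result).
n_nets = 1_000
outs = np.empty((n_nets, 2))
for i in range(n_nets):
    w = rng.normal(size=width)
    b = rng.normal(size=width)
    v = rng.normal(size=width) / np.sqrt(width)
    outs[i] = [v @ np.tanh(w * x1 + b), v @ np.tanh(w * x2 + b)]

print(np.cov(outs.T))  # ≈ the 2x2 GP covariance at inputs [x1, x2]
```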