
Gaussian process

A Gaussian process is a probabilistic model that defines a distribution over functions. In this framework, a function f mapping an input x to a real value is assumed to be drawn from a Gaussian process, specified by a mean function m(x) and a covariance function k(x, x'). For any finite set of inputs x_1, ..., x_n, the function values f(x_1), ..., f(x_n) follow a multivariate normal distribution with mean vector [m(x_1), ..., m(x_n)] and covariance matrix with entries k(x_i, x_j).
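To make the finite-dimensional view concrete, the sketch below draws sample functions from a GP prior. It assumes a zero mean function and a squared-exponential kernel, both chosen purely for illustration; the helper name rbf_kernel is ours, not from the text above.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance k(x, x') for 1-D inputs."""
    sqdist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale ** 2)

# Finite marginal of the GP prior: f(X) ~ N(m(X), K(X, X)).
X = np.linspace(0.0, 5.0, 100)
mean = np.zeros_like(X)                       # m(x) = 0, chosen for illustration
K = rbf_kernel(X, X) + 1e-9 * np.eye(len(X))  # small jitter for numerical stability

rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mean, K, size=3)  # three function draws, shape (3, 100)
```

The jitter on the diagonal is a standard numerical safeguard: kernel matrices over dense inputs are often nearly singular, and the tiny diagonal term keeps the Cholesky-based sampler stable without visibly changing the draws.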

A common setting introduces observed data y with noise: y_i = f(x_i) + ε_i, where the ε_i are independent Gaussian errors with variance σ^2. The prior over function values at the inputs X = (x_1, ..., x_n) is f(X) ~ N(m(X), K(X,X)) with K(X,X)_{ij} = k(x_i, x_j). The marginal likelihood of the observations is p(y|X) = N(y; m(X), K(X,X) + σ^2 I).
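Under these assumptions the log marginal likelihood has a closed form. Here is a minimal sketch of it, assuming a kernel(x1, x2) callable like the rbf_kernel helper above and a zero mean function by default; the Cholesky factorization is one standard way to evaluate it stably.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def log_marginal_likelihood(X, y, kernel, noise_var, mean_fn=np.zeros_like):
    """log p(y | X) = log N(y; m(X), K(X,X) + sigma^2 I), via a Cholesky factorization."""
    n = len(X)
    K = kernel(X, X) + noise_var * np.eye(n)
    r = y - mean_fn(X)                  # residual y - m(X)
    c, low = cho_factor(K)
    alpha = cho_solve((c, low), r)      # (K + sigma^2 I)^{-1} (y - m(X))
    log_det = 2.0 * np.sum(np.log(np.diag(c)))  # log|K + sigma^2 I| from the Cholesky diagonal
    return -0.5 * (r @ alpha + log_det + n * np.log(2.0 * np.pi))
```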

Inference in Gaussian processes yields a Gaussian posterior over function values at any test inputs X* given data X and y. The posterior mean and covariance are:

m*(X*) = m(X*) + K(X*,X) [K(X,X) + σ^2 I]^{-1} (y - m(X))

cov*(X*) = K(X*,X*) - K(X*,X) [K(X,X) + σ^2 I]^{-1} K(X,X*)
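A direct transcription of these two formulas, again assuming a kernel(x1, x2) callable like the rbf_kernel sketch above and a zero mean function by default:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gp_posterior(X, y, X_star, kernel, noise_var, mean_fn=np.zeros_like):
    """Posterior mean m*(X*) and covariance cov*(X*) at test inputs X_star."""
    K = kernel(X, X) + noise_var * np.eye(len(X))
    K_s = kernel(X_star, X)             # K(X*, X)
    K_ss = kernel(X_star, X_star)       # K(X*, X*)
    c, low = cho_factor(K)
    alpha = cho_solve((c, low), y - mean_fn(X))        # [K + sigma^2 I]^{-1} (y - m(X))
    mean_star = mean_fn(X_star) + K_s @ alpha
    cov_star = K_ss - K_s @ cho_solve((c, low), K_s.T)
    return mean_star, cov_star
```

The per-point predictive variance is the diagonal of cov_star; adding noise_var to it gives the predictive variance of a noisy observation rather than of the latent f.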

These formulas provide predictive distributions for new points, including uncertainty. Kernels encode assumptions about function properties; common choices include the squared exponential (RBF), Matérn, and periodic kernels. Hyperparameters in m and k (and σ) are often learned by maximizing the marginal likelihood or treated in a fully Bayesian way.
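For illustration, here are sketches of two of the kernels just mentioned (the ν = 3/2 Matérn and a periodic kernel), followed by hyperparameter fitting by maximizing the marginal likelihood on toy data. This reuses the rbf_kernel and log_marginal_likelihood sketches from above; optimizing in log-space is one common choice for keeping the parameters positive, not the only one.

```python
import numpy as np
from scipy.optimize import minimize

def matern32_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Matérn covariance with smoothness nu = 3/2."""
    a = np.sqrt(3.0) * np.abs(x1[:, None] - x2[None, :]) / lengthscale
    return variance * (1.0 + a) * np.exp(-a)

def periodic_kernel(x1, x2, lengthscale=1.0, period=1.0, variance=1.0):
    """Periodic covariance: strong correlation at multiples of the period."""
    d = np.abs(x1[:, None] - x2[None, :])
    return variance * np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / lengthscale ** 2)

# Fit (lengthscale, signal variance, noise variance) for an RBF kernel by
# maximizing the marginal likelihood; log-space keeps all three positive.
def neg_lml(log_params, X, y):
    ls, var, noise = np.exp(log_params)
    k = lambda a, b: rbf_kernel(a, b, lengthscale=ls, variance=var)
    return -log_marginal_likelihood(X, y, k, noise)

rng = np.random.default_rng(1)
X_train = np.sort(rng.uniform(0.0, 5.0, 30))
y_train = np.sin(X_train) + 0.1 * rng.standard_normal(30)   # toy 1-D data

result = minimize(neg_lml, x0=np.log([1.0, 1.0, 0.1]), args=(X_train, y_train))
lengthscale, variance, noise_var = np.exp(result.x)
```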

Gaussian processes are used in regression, time-series analysis, spatial statistics, Bayesian optimization, and surrogate modeling. They offer flexible, nonparametric modeling with principled uncertainty quantification, while computational challenges arise for large datasets (exact inference costs O(n^3) time in the number of training points), motivating scalable approaches such as sparse or inducing-point methods.
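As one illustration of the inducing-point idea, a Nyström-style approximation replaces the full kernel matrix with a low-rank surrogate built from m inducing inputs Z, K(X,X) ≈ K(X,Z) K(Z,Z)^{-1} K(Z,X), cutting the dominant cost from O(n^3) to O(n m^2). A minimal sketch, assuming a kernel callable like those above (this is one family of scalable methods, not a full sparse GP):

```python
import numpy as np

def nystrom_covariance(X, Z, kernel):
    """Low-rank Nystrom approximation K(X,X) ~ K(X,Z) K(Z,Z)^{-1} K(Z,X)."""
    K_xz = kernel(X, Z)                          # n x m cross-covariance
    K_zz = kernel(Z, Z) + 1e-9 * np.eye(len(Z))  # m x m, jittered for stability
    return K_xz @ np.linalg.solve(K_zz, K_xz.T)  # rank-m surrogate for K(X,X)
```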
