Kernel Principal Component Analysis
Kernel Principal Component Analysis (KPCA) is a nonlinear extension of principal component analysis (PCA) that uses the kernel trick to operate in a high-dimensional feature space without explicit mapping. By performing linear PCA in this feature space, KPCA can capture nonlinear structure in the data while remaining computable in the input space.
In practice, KPCA forms a centered kernel matrix K with entries K_ij = k(x_i, x_j) for all pairs of training samples. Because centering must happen in feature space rather than input space, it is applied to the kernel matrix itself via K' = K - 1_n K - K 1_n + 1_n K 1_n, where 1_n is the n x n matrix with every entry equal to 1/n. The leading eigenvectors of K' then yield the nonlinear principal components.
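The procedure above can be sketched in NumPy. This is a minimal illustration, not a production implementation; the RBF kernel, the gamma value, and the random sample data are assumptions chosen for the example.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Sketch of KPCA with an RBF kernel: build K, double-center it,
    eigendecompose, and project the training data."""
    n = X.shape[0]
    # RBF kernel matrix: K_ij = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Feature-space centering: K' = K - 1_n K - K 1_n + 1_n K 1_n
    one_n = np.ones((n, n)) / n
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # eigh returns eigenvalues in ascending order; take the largest ones
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:n_components]
    alphas = eigvecs[:, idx]
    lambdas = np.clip(eigvals[idx], 1e-12, None)
    # Scale eigenvectors so projections follow the standard KPCA convention
    alphas = alphas / np.sqrt(lambdas)
    # Projection of sample i onto component k: sum_j alphas[j, k] * Kc[i, j]
    return Kc @ alphas

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Z = kernel_pca(X, n_components=2, gamma=0.5)
print(Z.shape)  # (50, 2)
```

Because Kc is double-centered, the projected training coordinates sum to zero along each component, mirroring the zero-mean property of ordinary PCA scores.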
Common kernel functions include Gaussian (RBF), polynomial, and sigmoid kernels. The choice of kernel determines the implicit feature space, and hence which kinds of nonlinear structure KPCA can capture; kernel parameters, such as the RBF bandwidth, typically need to be tuned, for example by cross-validation.
KPCA is used for nonlinear dimensionality reduction, data visualization, and as a preprocessing step for clustering and classification, where unfolding nonlinear structure can make downstream methods more effective.
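For such uses, a common route is scikit-learn's KernelPCA. The sketch below assumes scikit-learn is installed; the synthetic noisy-circle data and the gamma value are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
# Noisy circle: a nonlinear structure that linear PCA cannot unfold
theta = rng.uniform(0, 2 * np.pi, size=200)
X = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(scale=0.05, size=(200, 2))

# Map the data to two kernel principal components as a preprocessing step
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=2.0)
Z = kpca.fit_transform(X)
print(Z.shape)  # (200, 2)
```

The transformed coordinates Z can then be fed to a clustering or classification algorithm in place of the raw inputs.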
Limitations include computational cost for large datasets, since constructing and decomposing the n x n kernel matrix requires O(n^2) memory and, for a full eigendecomposition, O(n^3) time.
Kernel PCA was introduced in the late 1990s by Schölkopf, Smola, and Müller as a kernelized generalization of linear PCA, showing that principal components in feature space can be computed from the eigenvectors of the kernel matrix.