LOOCV
Leave-One-Out Cross-Validation (LOOCV) is a cross-validation method used to estimate the predictive performance of a statistical model. In LOOCV, each observation in a dataset is used once as a test example, while the remaining n-1 observations form the training set. The process is repeated n times, producing n predicted test values; the LOOCV error is the average of the losses across these held-out observations.
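In symbols, writing $\hat{f}^{(-i)}$ for the model fit with observation $i$ left out and $L$ for the chosen loss function (notation introduced here only for illustration), the estimate is
$$\widehat{\mathrm{Err}}_{\mathrm{LOOCV}} = \frac{1}{n}\sum_{i=1}^{n} L\bigl(y_i,\ \hat{f}^{(-i)}(x_i)\bigr).$$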
Procedure: Given a dataset with n observations, for i from 1 to n, fit the model on all observations except the i-th, predict the held-out observation i, and record its loss. The LOOCV estimate is the average of these n losses.
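The following is a minimal sketch of this procedure, assuming scikit-learn is available; the linear regression model, squared-error loss, and toy data are illustrative choices, not part of the definition.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

# Toy data: n = 5 observations of a single feature.
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([1.2, 1.9, 3.2, 3.8, 5.1])

losses = []
for train_idx, test_idx in LeaveOneOut().split(X):
    # Fit on the n-1 training observations, predict the single held-out one.
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    losses.append((y[test_idx][0] - pred[0]) ** 2)

# The LOOCV error is the average held-out loss over all n folds.
loocv_error = np.mean(losses)
print(f"LOOCV mean squared error: {loocv_error:.4f}")
```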
Advantages and limitations: LOOCV has low bias as an estimator of generalization error because almost all of the data (n-1 of the n observations) is used to train each model. Its main drawbacks are computational cost, since the model must be refit n times, and potentially high variance, because the n training sets overlap almost completely and the resulting fold estimates are highly correlated.