outofsamplesplits
Out-of-sample splits refer to partitioning a dataset into training and evaluation subsets such that the evaluation data are never used during model training. The purpose is to estimate how a model will perform on new, unseen observations, gauging its generalization ability rather than its fit to the training data. In practice, the term is often encountered as out-of-sample testing or evaluation, and it may appear in code or documentation as a single word, such as outofsamplesplits.
Common strategies for creating out-of-sample splits include holdout methods, where a dataset is divided into a training set and a disjoint test set; k-fold cross-validation, where the data are partitioned into k folds and each fold serves once as the evaluation set while the remaining folds are used for training; and time-based splits, where the model is trained on earlier observations and evaluated on later ones.
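A minimal holdout sketch, assuming scikit-learn is available; the dataset, model, and 80/20 split ratio are illustrative choices, not prescribed by the definition above.

```python
# Holdout out-of-sample split: fit on one subset, score on another.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)

# Hold out 20% of the rows; the model never sees them during fitting.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LinearRegression().fit(X_train, y_train)

# Out-of-sample score: computed only on the held-out observations.
print("test R^2:", r2_score(y_test, model.predict(X_test)))
```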
The main role of out-of-sample evaluation is to compare models, tune hyperparameters, and obtain an unbiased estimate of predictive performance. Because repeatedly consulting the same held-out data can itself cause overfitting to that data, a common safeguard is a three-way split into training, validation, and test sets, with the test set reserved for a single final assessment.
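A hedged sketch of comparing two models with k-fold cross-validation, again assuming scikit-learn; the candidate models and the choice of k = 5 are illustrative.

```python
# Model comparison via k-fold cross-validation: every observation is
# scored exactly once while held out of the fold used to fit the model.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)

for name, model in [
    ("ridge", Ridge(alpha=1.0)),
    ("forest", RandomForestRegressor(n_estimators=100, random_state=0)),
]:
    # Each fold is evaluated out of sample: the model is fit on the
    # other folds and scored on the held-out fold.
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f} (+/- {scores.std():.3f})")
```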
In practice, out-of-sample evaluation is closely linked to backtesting in finance and to forecast verification in fields such as meteorology. In these settings the splits must respect temporal ordering, so that the model is never trained on information from the future of the period it is asked to predict.
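A minimal walk-forward (backtest-style) sketch using scikit-learn's TimeSeriesSplit on a synthetic series; the series and the number of splits are illustrative assumptions, not a complete backtesting framework.

```python
# Walk-forward splits: each fold trains on an initial segment of the
# series and tests on the segment that immediately follows it.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(0)
X = np.arange(200).reshape(-1, 1).astype(float)  # synthetic time index
y = 0.5 * X.ravel() + rng.normal(scale=5.0, size=200)  # noisy trend

for fold, (train_idx, test_idx) in enumerate(TimeSeriesSplit(n_splits=4).split(X)):
    # Train only on the past; evaluate on the immediately following window,
    # so the model never sees future observations during fitting.
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    score = model.score(X[test_idx], y[test_idx])
    print(f"fold {fold}: train ends at t={train_idx[-1]}, test R^2 = {score:.3f}")
```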