Out-of-domain
Out-of-domain, often used interchangeably with out-of-distribution (OOD), refers to data inputs or situations that lie outside the distribution of data on which a model or system was trained. In machine learning and artificial intelligence, the concept signals when a model’s predictions may be unreliable because the input does not resemble the training examples.
Examples of out-of-domain conditions include images from classes unseen during training, text containing novel vocabulary, and data corrupted by noise or other perturbations.
Detection and handling of out-of-domain inputs are active areas of research and practice. Approaches include calibrating model confidence, thresholding predictive uncertainty or softmax scores, training dedicated out-of-distribution detectors, and rejecting or deferring inputs the model cannot handle reliably.
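As a concrete illustration, the sketch below flags an input as out-of-domain when the model's maximum softmax probability falls below a fixed threshold, one of the simplest detection baselines. The raw logit values and the threshold of 0.7 are illustrative assumptions, not a prescription for any particular model or dataset.

```python
# Minimal sketch of out-of-domain detection via maximum softmax
# probability (MSP) thresholding. Logits and threshold are assumed
# values chosen purely for illustration.
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert raw model logits to a probability distribution."""
    z = logits - logits.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum(axis=-1, keepdims=True)

def is_out_of_domain(logits: np.ndarray, threshold: float = 0.7) -> bool:
    """Flag an input as out-of-domain if the top-class confidence
    falls below the chosen threshold."""
    confidence = softmax(logits).max()
    return bool(confidence < threshold)

# Example: logits from a hypothetical 3-class classifier on two inputs.
in_domain_logits = np.array([4.0, 0.5, -1.0])  # peaked distribution, high confidence
ood_logits = np.array([0.2, 0.1, 0.0])         # nearly uniform, low confidence

print(is_out_of_domain(in_domain_logits))  # False: high max probability
print(is_out_of_domain(ood_logits))        # True: low max probability
```

In practice the threshold is usually tuned on held-out data, and more elaborate detectors replace the raw softmax score with calibrated or learned uncertainty estimates.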
See also: out-of-distribution, open-set recognition, anomaly detection, distribution shift.