AUCul
AUCul, short for Area Under the Calibration Curve, is a metric used to evaluate the calibration quality of probabilistic models. It measures how closely predicted probabilities align with observed outcomes across the probability range, focusing on the accuracy of probability estimates rather than purely on discrimination or ranking.
To compute AUCul, predictions are typically partitioned into bins (for example, deciles). For each bin, the mean predicted probability and the observed frequency of positive outcomes are computed; plotting the observed frequencies against the mean predictions yields the calibration curve, and AUCul summarizes that curve as an area, as illustrated in the sketch below.
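The exact aggregation formula for AUCul is not specified here, so the following Python sketch only illustrates the binning step just described and one plausible area-based summary: it groups predictions into deciles, computes the mean predicted probability and observed positive rate per bin, and integrates the resulting curve with the trapezoidal rule. The function name auc_calibration and the trapezoidal aggregation are assumptions made for illustration, not a definitive implementation of AUCul.

    import numpy as np

    def auc_calibration(y_true, y_prob, n_bins=10):
        """Sketch of an area-based calibration summary (assumed aggregation)."""
        y_true = np.asarray(y_true, dtype=float)
        y_prob = np.asarray(y_prob, dtype=float)

        # Assign each prediction to one of n_bins equal-width bins on [0, 1].
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        bin_ids = np.clip(np.digitize(y_prob, edges[1:-1]), 0, n_bins - 1)

        mean_pred, obs_rate = [], []
        for b in range(n_bins):
            mask = bin_ids == b
            if mask.any():  # skip empty bins
                mean_pred.append(y_prob[mask].mean())  # mean predicted probability
                obs_rate.append(y_true[mask].mean())   # observed positive frequency

        # Trapezoidal area under the (mean prediction, observed rate) curve.
        mp = np.asarray(mean_pred)
        orr = np.asarray(obs_rate)
        return float(np.sum(0.5 * (orr[1:] + orr[:-1]) * np.diff(mp)))

    # Example: synthetic outcomes drawn from the predicted probabilities,
    # so the calibration curve should track the diagonal.
    rng = np.random.default_rng(0)
    p = rng.uniform(0, 1, 10_000)
    y = rng.binomial(1, p)
    print(round(auc_calibration(y, p), 3))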
Interpretation and relationship to other metrics: A higher AUCul generally indicates better calibration, meaning the model's predicted probabilities track the observed outcome frequencies closely. ROC AUC, by contrast, measures discrimination, that is, how well the model ranks positive cases above negative ones; a model can rank cases well while producing systematically miscalibrated probabilities, so the two metrics capture complementary aspects of performance. A short demonstration of this distinction follows.
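To make the calibration-versus-discrimination distinction concrete, the sketch below (an illustration, not part of the AUCul definition) applies a monotone transform to well-calibrated scores using scikit-learn's roc_auc_score and calibration_curve. The mean absolute gap between bin means and observed rates is used here only as a simple calibration indicator. Ranking, and hence ROC AUC, is unchanged by the transform, while calibration degrades.

    import numpy as np
    from sklearn.calibration import calibration_curve
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    p = rng.uniform(0, 1, 50_000)   # well-calibrated probabilities
    y = rng.binomial(1, p)          # outcomes drawn from those probabilities
    p_cubed = p ** 3                # monotone transform: same ranking, worse calibration

    # Discrimination (ranking) is unchanged by the monotone transform ...
    print(roc_auc_score(y, p), roc_auc_score(y, p_cubed))  # identical values

    # ... but the calibration curve is not: observed rates drift from predictions.
    for probs, label in [(p, "original"), (p_cubed, "cubed")]:
        obs, pred = calibration_curve(y, probs, n_bins=10)
        print(label, np.round(np.abs(obs - pred).mean(), 3))  # mean calibration gap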
Applications: AUCul is used in domains where accurate probability estimates are important, such as healthcare risk prediction, credit scoring, and weather forecasting, where decisions depend on the probability values themselves rather than only on how cases are ranked.
See also: calibration curve, reliability diagram, ROC AUC, probabilistic forecasting.