ROC curve
The ROC curve (receiver operating characteristic curve) is a graphical representation used to evaluate the performance of a binary classifier across decision threshold settings. The curve plots the true positive rate (sensitivity) on the vertical axis against the false positive rate (1 − specificity) on the horizontal axis. To construct it, one computes the TPR and FPR for a range of decision thresholds applied to the model's predicted scores or probabilities.
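The construction described above can be sketched as follows. This is a minimal illustration, not a reference implementation; the labels and scores are made-up toy values, and each distinct score is used as a threshold so the curve steps from (0, 0) to (1, 1).

```python
import numpy as np

def roc_points(y_true, scores):
    """Compute (FPR, TPR) pairs, one per distinct score threshold."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    # Sweep thresholds from high to low so the curve runs (0,0) -> (1,1).
    thresholds = np.sort(np.unique(scores))[::-1]
    P = (y_true == 1).sum()  # number of actual positives
    N = (y_true == 0).sum()  # number of actual negatives
    fpr, tpr = [0.0], [0.0]
    for t in thresholds:
        pred = scores >= t  # classify as positive at this threshold
        tpr.append(((pred) & (y_true == 1)).sum() / P)
        fpr.append(((pred) & (y_true == 0)).sum() / N)
    return np.array(fpr), np.array(tpr)

# Toy labels and predicted scores (illustrative values only).
y = [0, 0, 1, 1]
s = [0.1, 0.4, 0.35, 0.8]
fpr, tpr = roc_points(y, s)
```

Plotting `tpr` against `fpr` gives the ROC curve; library routines such as scikit-learn's `roc_curve` perform the same sweep more efficiently.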
The area under the ROC curve (AUC) is a summary statistic that measures discriminative ability. An AUC of 1.0 indicates perfect separation of the two classes, while an AUC of 0.5 corresponds to random guessing. Equivalently, the AUC is the probability that a randomly chosen positive instance receives a higher score than a randomly chosen negative instance.
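The probabilistic reading of the AUC can be computed directly by comparing every positive-negative score pair (the Mann-Whitney U formulation), with ties counted as half. A small sketch on made-up toy data:

```python
from itertools import product

def auc_rank(y_true, scores):
    """AUC as P(score of a positive > score of a negative); ties count 0.5."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

# Illustrative values only.
y = [0, 0, 1, 1]
s = [0.1, 0.4, 0.35, 0.8]
auc = auc_rank(y, s)  # 0.75
```

The same value is obtained by integrating the ROC curve, e.g. with the trapezoidal rule over its (FPR, TPR) points.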
ROC curves are used to compare models and to select operating points by trading off sensitivity and specificity: moving along the curve corresponds to varying the decision threshold, and the point chosen in practice depends on the relative costs of false positives and false negatives.
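One common (though not the only) criterion for picking an operating point is Youden's J statistic, J = TPR − FPR, which selects the threshold farthest above the chance diagonal. A self-contained sketch on made-up toy data:

```python
import numpy as np

# Illustrative labels and scores only.
y = np.array([0, 0, 1, 1])
s = np.array([0.1, 0.4, 0.35, 0.8])

thr = np.sort(np.unique(s))[::-1]          # candidate thresholds, high to low
P, N = (y == 1).sum(), (y == 0).sum()
tpr = np.array([((s >= t) & (y == 1)).sum() / P for t in thr])
fpr = np.array([((s >= t) & (y == 0)).sum() / N for t in thr])

j = tpr - fpr                              # Youden's J at each threshold
best = thr[np.argmax(j)]                   # threshold maximizing J
```

With asymmetric misclassification costs, a cost-weighted criterion would replace the simple difference.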
Limitations: The ROC curve does not reflect class prevalence or the calibration of predicted probabilities, and AUC can paint an overly optimistic picture on highly imbalanced datasets, where a precision-recall curve is often more informative. A high AUC also says nothing about whether the predicted probabilities themselves are well calibrated.