Fairness metrics
Fairness metrics are quantitative tools for assessing the equity of automated decision-making systems, such as machine learning models, in contexts ranging from credit approval to hiring and criminal justice. They provide a systematic framework for evaluating whether outcomes are unbiased across protected groups defined by sensitive attributes such as race, gender, age, or disability status.

While not exhaustive, common categories of fairness metrics include statistical parity, equalized odds, predictive equality, calibration, and individual fairness. Statistical parity examines whether the proportion of positive predictions is equal across groups; equalized odds requires equal false positive and false negative rates across groups; predictive equality relaxes this to require only equal false positive rates; calibration demands that predicted probabilities correspond to actual outcome frequencies within each group; and individual fairness seeks to treat similar individuals similarly, regardless of group membership.
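As a concrete illustration, the sketch below shows how the group-based metrics described above might be computed from binary predictions and a binary sensitive attribute. It is a minimal, self-contained example assuming two groups coded 0 and 1; the function names, toy data, and return values are illustrative conventions rather than a standard library API.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between group 1 and group 0."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def group_error_rates(y_true, y_pred, group, g):
    """False positive and false negative rates within a single group g."""
    mask = np.asarray(group) == g
    yt, yp = np.asarray(y_true)[mask], np.asarray(y_pred)[mask]
    fpr = ((yp == 1) & (yt == 0)).sum() / max((yt == 0).sum(), 1)
    fnr = ((yp == 0) & (yt == 1)).sum() / max((yt == 1).sum(), 1)
    return fpr, fnr

def equalized_odds_gaps(y_true, y_pred, group):
    """Absolute gaps in FPR and FNR between the two groups.
    Equalized odds holds on this sample when both gaps are zero;
    predictive equality concerns only the FPR gap."""
    fpr0, fnr0 = group_error_rates(y_true, y_pred, group, 0)
    fpr1, fnr1 = group_error_rates(y_true, y_pred, group, 1)
    return abs(fpr1 - fpr0), abs(fnr1 - fnr0)

if __name__ == "__main__":
    # Toy example: true labels, model predictions, and a binary sensitive attribute.
    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
    group  = [0, 0, 0, 0, 1, 1, 1, 1]
    print("Statistical parity difference:",
          statistical_parity_difference(y_pred, group))
    print("Equalized odds gaps (FPR, FNR):",
          equalized_odds_gaps(y_true, y_pred, group))
```

In practice, calibration would additionally be checked by comparing predicted probabilities against observed outcome frequencies within each group, and individual fairness requires a task-specific similarity measure between individuals, neither of which is captured by the simple group rates above.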
The selection of an appropriate metric depends on legal, ethical, and contextual considerations. In some jurisdictions,
Critiques of fairness metrics highlight that no metric fully encapsulates the complexity of social justice. Some