Fairness in Binary Classification
To detect and mitigate societal bias in binary classification, you can use the fairness
functions in Statistics and Machine Learning Toolbox™. First,
evaluate the fairness of a data set or classification model using the bias and group metrics
computed by the fairnessMetrics function. Then, mitigate the bias by using
fairnessWeights to reweight observations,
disparateImpactRemover to remove the disparate impact of a sensitive attribute, or
fairnessThresholder to optimize the classification threshold. The fairnessWeights and
disparateImpactRemover functions provide preprocessing techniques that allow you to adjust your predictor data before
training (or retraining) a classifier. The fairnessThresholder function
provides a postprocessing technique that adjusts labels near prediction boundaries for a
trained classifier. To assess the final model behavior, you can use the
fairnessMetrics function as well as various interpretability functions.
For more information, see Interpret Machine Learning Models.
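The evaluate-mitigate-reassess workflow described above can be sketched as follows. This is an illustrative outline only: the table `tbl`, response variable `Y`, and sensitive attribute `Group` are placeholder names, and the exact argument lists may differ by release, so consult the reference pages for fairnessMetrics, fairnessWeights, disparateImpactRemover, and fairnessThresholder before running it.

```matlab
% Illustrative sketch; variable names are placeholders and call
% signatures may differ from your release's reference pages.

% 1) Evaluate fairness of the data set with respect to a sensitive
%    attribute (a variable named "Group" in the table tbl).
evaluator = fairnessMetrics(tbl,"Y",SensitiveAttributeNames="Group");
report(evaluator)

% 2a) Preprocessing: reweight observations so that the sensitive
%     attribute and the response are closer to independent, then train
%     with the adjusted weights.
weights = fairnessWeights(tbl,"Group","Y");
mdl = fitctree(tbl,"Y",Weights=weights);

% 2b) Preprocessing (alternative): remove the disparate impact of the
%     sensitive attribute from the predictor data before training.
remover = disparateImpactRemover(tbl,"Group");

% 3) Postprocessing: adjust labels near the prediction boundary of a
%    trained classifier by optimizing the classification threshold.
thresholder = fairnessThresholder(mdl,tbl,"Group","Y");

% 4) Reassess the final model with fairnessMetrics and the
%    interpretability functions.
```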
Topics
- Introduction to Fairness in Binary Classification
  Detect and mitigate societal bias in machine learning by using the fairnessMetrics, fairnessWeights, disparateImpactRemover, and fairnessThresholder functions.
- Explore Fairness Metrics for Credit Scoring Model (Risk Management Toolbox)
- Bias Mitigation in Credit Scoring by Reweighting (Risk Management Toolbox)
- Bias Mitigation in Credit Scoring by Disparate Impact Removal (Risk Management Toolbox)