Intersectional Group Fairness in Machine Learning

  • The importance of fairness in AI
  • Why AI fairness is even more critical today
  • Why intersectional group fairness is critical to improving AI fairness

ML Models Prone to Error

Before explaining why, the first question should be: how do you detect and mitigate bias in your models to avoid a bad outcome? For example, you may have heard about the Apple case, or the gender discrimination complaints against J.P. Morgan, which were settled federally for fifty million dollars. Machine learning models are often biased. The main issues currently faced by ML models are:

  • Wavering performance over time, decaying due to changes in user behavior or system errors
  • Data Drift — Production data can shift away from the training data, and performance degrades as it does (a minimal drift check is sketched after this list)
  • Data Integrity — Data errors are common in ML pipelines and are difficult to catch before they impact metrics and outcomes
  • Fairness/Bias — Models can introduce or amplify bias, creating regulatory and brand risk
  • Transparency — Models are increasingly black boxes, which makes them difficult to debug
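
As one concrete illustration of the data drift point above, here is a minimal sketch (the feature values and the significance level are hypothetical) that compares a feature's production distribution against its training distribution with a two-sample Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_check(train_values, prod_values, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test: has this feature's
    production distribution shifted away from the training one?"""
    stat, p_value = ks_2samp(train_values, prod_values)
    return {"statistic": stat, "p_value": p_value, "drift": p_value < alpha}

# Hypothetical feature: training data vs. production data with a shifted mean.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)
prod = rng.normal(loc=0.4, scale=1.0, size=5_000)

print(drift_check(train, prod))
```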

Intersectional Group Fairness

With that in mind, let's talk about what intersectional group fairness is: fairness along overlapping dimensions such as race, gender, sexual orientation, age, and disability. It is not just a binary split between a protected class and everyone else; it takes multiple dimensions into account at the same time.
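
As a minimal sketch of what that means in practice (the column names and toy data below are hypothetical), you evaluate outcome rates per intersection of attributes, e.g. each race-and-gender combination, rather than per attribute in isolation:

```python
import pandas as pd

def subgroup_positive_rates(df, protected_cols, outcome_col="prediction"):
    """Positive-prediction rate for every intersectional subgroup,
    e.g. each (race, gender) combination rather than each attribute alone."""
    return df.groupby(protected_cols)[outcome_col].mean().sort_values()

# Hypothetical toy data: two protected attributes and a binary model output.
df = pd.DataFrame({
    "race":       ["A", "A", "B", "B", "A", "B", "A", "B"],
    "gender":     ["F", "M", "F", "M", "F", "M", "M", "F"],
    "prediction": [1,   1,   0,   1,   0,   1,   1,   0],
})

rates = subgroup_positive_rates(df, ["race", "gender"])
print(rates)
# A gap between the best- and worst-treated intersections can stay hidden
# when each attribute is checked on its own.
print("worst/best ratio:", rates.min() / rates.max())
```

In this toy data the (B, F) subgroup receives no positive predictions, even though neither race B nor gender F looks extreme when checked on its own.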

Model Performance Management

Recently we introduced what we call a model performance management framework, which suggests establishing a feedback loop in the machine learning lifecycle to help data scientists improve models more efficiently.
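
As a rough illustration of that feedback-loop idea (every function, signal, and threshold below is a hypothetical placeholder, not a specific product API), monitoring signals from production feed back into the next training round:

```python
import random

def train(feedback=None):
    """Hypothetical training step; stands in for your real training pipeline."""
    return {"version": random.randint(1, 1000), "used_feedback": feedback is not None}

def monitor(model):
    """Hypothetical monitoring step returning production signals for the model."""
    return {"drift_score": random.random(), "disparate_impact": random.uniform(0.6, 1.0)}

def needs_action(signals):
    """Decide whether the monitored signals warrant a new training round."""
    return signals["drift_score"] > 0.5 or signals["disparate_impact"] < 0.8

model = train()
for cycle in range(3):  # the feedback loop: monitor production, retrain when needed
    signals = monitor(model)
    if needs_action(signals):
        model = train(feedback=signals)
print(model)
```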

Bias Detection in Modelling

For our model, we selected fairness metrics that are widely used and give effective coverage, including disparate impact. Disparate impact is a form of indirect, typically unintentional discrimination in which a facially neutral decision process disproportionately affects members of a protected group. Closely related to this is demographic parity, and we decided to use both.
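
As a minimal sketch (the predictions and group labels are made-up toy data), both metrics come down to comparing positive-outcome rates between groups:

```python
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of positive-outcome rates: unprivileged / privileged.
    A common rule of thumb flags values below 0.8 (the 'four-fifths rule')."""
    rate_unpriv = y_pred[group == 0].mean()
    rate_priv = y_pred[group == 1].mean()
    return rate_unpriv / rate_priv

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-outcome rates between the two groups."""
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

# Hypothetical predictions (1 = favourable decision) and group membership.
y_pred = np.array([1, 1, 1, 0, 1, 0, 1, 0, 0, 1])
group = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])  # 1 = privileged group

print("disparate impact:", disparate_impact(y_pred, group))
print("demographic parity diff:", demographic_parity_difference(y_pred, group))
```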

  • Real-time Fairness — Measure fairness on production data as well as on static datasets (a minimal monitoring sketch follows this list)
  • Integration into Modelling — Receive real-time fairness alerts and address issues when they matter
  • Built-in Explainability — Understand the impact of features on specific fairness metrics
  • Risk Mitigation — Trace back to any past incident or analyze future predictions to minimize risk
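
A minimal sketch of the real-time fairness idea referenced above (the window size, threshold, and group encoding are all illustrative assumptions): keep a sliding window of recent predictions per group and raise an alert when the disparate impact ratio drops below a threshold.

```python
from collections import deque

class FairnessMonitor:
    """Sliding-window fairness alerting on streaming predictions.
    Groups are encoded as 0 (unprivileged) and 1 (privileged)."""

    def __init__(self, window=1000, threshold=0.8):
        self.window = {0: deque(maxlen=window), 1: deque(maxlen=window)}
        self.threshold = threshold

    def log(self, prediction, group):
        """Record one production prediction and return the current status."""
        self.window[group].append(prediction)
        return self.check()

    def check(self):
        unpriv, priv = self.window[0], self.window[1]
        if not unpriv or not priv:
            return None  # not enough data for both groups yet
        rate_priv = sum(priv) / len(priv)
        if rate_priv == 0:
            return None
        di = (sum(unpriv) / len(unpriv)) / rate_priv
        return {"disparate_impact": di, "alert": di < self.threshold}

# Hypothetical stream of (prediction, group) pairs from production.
monitor = FairnessMonitor(window=500, threshold=0.8)
for pred, grp in [(1, 1), (1, 1), (0, 0), (1, 0), (0, 0), (1, 1)]:
    status = monitor.log(pred, grp)
print(status)
```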

RE•WORK

Bringing together the brightest minds in AI & Deep Learning from research & industry https://www.re-work.co/