
Abstract: Western societies are marked by diverse and extensive biases and inequalities that are unavoidably embedded in the data used to train machine learning systems. Algorithms trained on biased data will, without intervention, produce biased outcomes and increase the inequality experienced by historically disadvantaged groups.
Recognising this problem, much work has emerged in recent years on testing for bias in machine learning and AI systems using various bias metrics. In this paper we assessed the compatibility of technical fairness metrics and tests used in machine learning with the aims and purpose of EU non-discrimination law. Unfortunately, 13 of the 20 tests examined do not live up to UK and EU standards. One reason is that they were developed in the US, where a different notion of fairness and discrimination prevails.
We provide concrete recommendations, including a user-friendly checklist, for choosing the most appropriate fairness metric for uses of machine learning under EU non-discrimination law.
Bio: Coming soon!

Sandra Wachter, PhD
Professor, Technology and Regulation | Oxford Internet Institute, University of Oxford
