AI and Bias: How to Detect It and How to Prevent It

Abstract: 

Western societies are marked by diverse and extensive biases and inequalities that are unavoidably embedded in the data used to train machine learning systems. Algorithms trained on biased data will, without intervention, produce biased outcomes and increase the inequality experienced by historically disadvantaged groups.

Recognising this problem, much work has emerged in recent years on testing for bias in machine learning and AI systems using various bias metrics. In this paper we assess the compatibility of the technical fairness metrics and tests used in machine learning with the aims and purpose of EU non-discrimination law. Unfortunately, 13 of the 20 tests we examine do not live up to UK and EU standards. One reason is that they were developed in the US, where a different notion of fairness and discrimination prevails.
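To make the notion of a "fairness metric" concrete, the sketch below implements one widely used test, the demographic parity (statistical parity) difference, which compares favourable-outcome rates across groups. This is an illustration only: the function name and toy data are ours, and the abstract does not say whether this particular metric is among those judged compatible with EU law.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in favourable-outcome rates between two groups.

    y_pred: binary model decisions (1 = favourable outcome)
    group:  binary protected-attribute membership (0 or 1)
    """
    rate_group_0 = y_pred[group == 0].mean()
    rate_group_1 = y_pred[group == 1].mean()
    return rate_group_0 - rate_group_1

# Toy example: a model that grants the favourable outcome
# to 75% of group 0 but only 25% of group 1.
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_difference(y_pred, group))  # 0.5
```

A value of 0 would indicate equal rates across groups; in practice such a score is only one input into a legal assessment, not a verdict of discrimination or compliance on its own.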

We provide concrete recommendations, including a user-friendly checklist for choosing the most appropriate fairness metric for uses of machine learning under EU non-discrimination law.

Bio: 

Bio Coming Soon!
