Abstract: In recent years, fairness in machine learning (ML) and artificial intelligence (AI) has emerged as a highly active area of research and development. Most work defines fairness in simple terms: reducing gaps in performance or outcomes between demographic groups while preserving as much of the original system's accuracy as possible. This oversimplification of equality through fairness measures is troubling. Many current fairness measures can be satisfied only through "levelling down," where fairness is achieved by making every group worse off, or by bringing better-performing groups down to the level of the worst off. When fairness can only be achieved by making everyone worse off in material or relational terms, through injuries of stigma, loss of solidarity, unequal concern, and missed opportunities for substantive equality, something would appear to have gone wrong in translating the vague concept of 'fairness' into practice. In this talk I will examine the causes and prevalence of levelling down across fairML, and explore possible justifications and criticisms based on philosophical and legal theories of equality and distributive justice, as well as equality law jurisprudence. FairML does not currently engage in the type of measurement, reporting, or analysis necessary to justify levelling down in practice. I will propose a first step towards substantive equality in fairML: "levelling up" systems by design through the enforcement of minimum acceptable harm thresholds, or "minimum rate constraints," as fairness constraints. I will likewise propose an alternative harms-based framework to counter the oversimplified egalitarian framing currently dominant in the field, pushing future discussion towards substantive equality and away from strict egalitarianism by default.
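To make the contrast concrete, here is a minimal illustrative sketch (not taken from the talk; all function names, group names, and thresholds are hypothetical) of the difference between a strict parity constraint, which can be satisfied by levelling a better-off group down, and a minimum rate constraint, which requires every group to reach an acceptable floor:

```python
def satisfies_minimum_rate(rates_by_group, min_rate):
    """Levelling up: accept only if every group's rate (e.g. true
    positive rate) reaches the minimum acceptable threshold."""
    return all(rate >= min_rate for rate in rates_by_group.values())

def satisfies_strict_parity(rates_by_group, tolerance=0.01):
    """Strict egalitarian constraint: accept only if the gap between
    the best- and worst-off groups is small -- a condition that can be
    met by lowering the best-off group's rate."""
    rates = list(rates_by_group.values())
    return max(rates) - min(rates) <= tolerance

# Hypothetical per-group true positive rates.
rates = {"group_a": 0.92, "group_b": 0.85}

# Strict parity fails here (gap of 0.07 > 0.01) and could be "fixed"
# by degrading group_a to 0.85; a minimum rate constraint of 0.80
# passes without making either group worse off.
print(satisfies_minimum_rate(rates, 0.80))  # True
print(satisfies_strict_parity(rates))       # False
```

The sketch only checks constraints on already-measured rates; in practice such a floor would be enforced during model selection or training.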
Bio: Professor Brent Mittelstadt is an Associate Professor, Senior Research Fellow, and Director of Research at the Oxford Internet Institute, University of Oxford. He leads the Governance of Emerging Technologies (GET) research programme which works across ethics, law, and emerging information technologies. He is a prominent data ethicist and philosopher specializing in AI ethics, algorithmic fairness and explainability, and technology law and policy. Prof. Mittelstadt is the author of foundational works addressing the ethics of algorithms, AI, and Big Data; fairness, accountability, and transparency in machine learning; data protection and non-discrimination law; group privacy; ethical auditing of automated systems; and digital epidemiology and public health ethics. His contributions in these areas are widely cited and have been implemented by researchers, policy-makers, and companies internationally, featuring in policy proposals and guidelines from the UK government, Information Commissioner’s Office, and European Commission, as well as products from Google, Amazon, and Microsoft.