The Neutrality Fallacy: When Algorithmic Fairness Interventions are (Not) Positive Action

Authors
  • Weerts, Hilde
  • Xenidis, Raphaële
  • Tarissan, Fabien
  • Olsen, Henrik Palmer
  • Pechenizkiy, Mykola
Type
Published Article
Publication Date
Apr 18, 2024
Submission Date
Apr 18, 2024
Identifiers
DOI: 10.1145/3630106.3659025
Source
arXiv
License
Green

Abstract

Various metrics and interventions have been developed to identify and mitigate unfair outputs of machine learning systems. While individuals and organizations have an obligation to avoid discrimination, the use of fairness-aware machine learning interventions has also been described as amounting to 'algorithmic positive action' under European Union (EU) non-discrimination law. As the Court of Justice of the European Union has been strict in assessing the lawfulness of positive action, this interpretation would impose a significant legal burden on those wishing to implement fair-ml interventions. In this paper, we propose that algorithmic fairness interventions should often be interpreted as a means to prevent discrimination, rather than as a measure of positive action. Specifically, we suggest that this category mistake can often be attributed to neutrality fallacies: faulty assumptions regarding the neutrality of fairness-aware algorithmic decision-making. Our findings raise the question of whether a negative obligation to refrain from discrimination is sufficient in the context of algorithmic decision-making. Consequently, we suggest moving away from a duty to 'not do harm' towards a positive obligation to actively 'do no harm' as a more adequate framework for algorithmic decision-making and fair-ml interventions.

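For context on the abstract's opening sentence: one commonly used fairness metric in this literature is the demographic parity difference, the gap in positive-prediction rates between groups. The sketch below is illustrative only; the function name, toy data, and binary group encoding are assumptions for the example, not drawn from the paper.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: array of 0/1 predictions; group: array of 0/1 group labels.
    A value near 0 means both groups are selected at similar rates.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # selection rate for group 0
    rate_b = y_pred[group == 1].mean()  # selection rate for group 1
    return abs(rate_a - rate_b)

# Example: group 0 is selected at rate 1.0, group 1 at rate 0.5 -> prints 0.5
print(demographic_parity_difference([1, 1, 1, 0], [0, 0, 1, 1]))
```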