
Trading off accuracy and explainability in AI decision-making: findings from 2 citizens' juries.

Authors
  • van der Veer, Sabine N1
  • Riste, Lisa2, 3
  • Cheraghi-Sohi, Sudeh2, 4
  • Phipps, Denham L3
  • Tully, Mary P3
  • Bozentko, Kyle5
  • Atwood, Sarah5
  • Hubbard, Alex6
  • Wiper, Carl6
  • Oswald, Malcolm7, 8
  • Peek, Niels1, 2
  • 1 Centre for Health Informatics, Division of Informatics, Imaging and Data Science, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK.
  • 2 NIHR Greater Manchester Patient Safety Translational Research Centre, School of Health Sciences, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK.
  • 3 Division of Pharmacy and Optometry, School of Health Sciences, The University of Manchester, Manchester, UK.
  • 4 Division of Population Health, Health Services Research & Primary Care, School of Health Sciences, The University of Manchester, Manchester, UK.
  • 5 Jefferson Center, Saint Paul, Minnesota, USA.
  • 6 Information Commissioner's Office, Wilmslow, UK.
  • 7 School of Law, Faculty of Humanities, The University of Manchester, Manchester, UK.
  • 8 Citizens' Juries CIC, Manchester, UK.
Type
Published Article
Journal
Journal of the American Medical Informatics Association
Publisher
Oxford University Press
Publication Date
Sep 18, 2021
Volume
28
Issue
10
Pages
2128–2138
Identifiers
DOI: 10.1093/jamia/ocab127
PMID: 34333646
Source
Medline
Language
English
License
Unknown

Abstract

To investigate how the general public trades off explainability versus accuracy of artificial intelligence (AI) systems, and whether this differs between healthcare and non-healthcare scenarios.

Citizens' juries are a form of deliberative democracy that elicits informed judgment from a representative sample of the general public on policy questions. We organized two 5-day citizens' juries in the UK with 18 jurors each. Jurors considered 3 AI systems with different levels of accuracy and explainability in 2 healthcare and 2 non-healthcare scenarios. For each scenario, jurors voted for their preferred system; votes were analyzed descriptively. Qualitative data on the considerations behind their preferences included transcribed audio-recordings of plenary sessions, observational field notes, outputs from small group work, and free-text comments accompanying jurors' votes; these data were analyzed thematically by scenario, both per AI system and across systems.

In healthcare scenarios, jurors favored accuracy over explainability, whereas in non-healthcare scenarios they valued explainability either equally to, or more than, accuracy. Jurors' considerations in favor of accuracy concerned the impact of decisions on individuals and society, and the potential to increase the efficiency of services. Reasons for emphasizing explainability included increased opportunities for individuals and society to learn and improve future prospects, and an enhanced ability for humans to identify and resolve system biases.

Citizens may value the explainability of AI systems in healthcare less than in non-healthcare domains, and less than professionals often assume, especially when explainability is weighed against system accuracy. The public should therefore be actively consulted when developing policy on AI explainability.

© The Author(s) 2021. Published by Oxford University Press on behalf of the American Medical Informatics Association.
