Affordable Access


Explicability of artificial intelligence in radiology: Is a fifth bioethical principle conceptually necessary?

Authors
  • Ursin, Frank1
  • Timmermann, Cristian1
  • Steger, Florian1
  • 1 Institute of the History, Philosophy and Ethics of Medicine, Ulm University, Ulm, Germany
Type
Published Article
Journal
Bioethics
Publication Date
Feb 01, 2022
Volume
36
Issue
2
Pages
143–153
Identifiers
DOI: 10.1111/bioe.12918
PMID: 34251687
Source
Medline
Language
English
License
Unknown

Abstract

Recent years have witnessed intensive efforts to specify which requirements ethical artificial intelligence (AI) must meet. General guidelines for ethical AI consider a varying number of principles important. A frequent novel element in these guidelines, which we have bundled together under the term explicability, aims to reduce the black-box character of machine learning algorithms. The centrality of this element invites reflection on the conceptual relation between explicability and the four bioethical principles. This is important because the application of general ethical frameworks to clinical decision-making entails conceptual questions: Is explicability a free-standing principle? Is it already covered by the well-established four bioethical principles? Or is it an independent value that needs to be recognized as such in medical practice? We discuss these questions in a conceptual-ethical analysis, which builds upon the findings of an empirical document analysis. Using the example of the medical specialty of radiology, we analyze the position of radiological associations on the ethical use of medical AI. We address three questions: Are there references to explicability or a similar concept? What are the reasons for such inclusion? Which ethical concepts are referred to? © 2021 The Authors. Bioethics published by John Wiley & Sons Ltd.

