Differences in cohort study data affect external validation of artificial intelligence models for predictive diagnostics of dementia - lessons for translation into clinical practice

Authors
  • Birkenbihl, Colin1, 2
  • Emon, Mohammad Asif1, 2
  • Vrooman, Henri3
  • Westwood, Sarah4
  • Lovestone, Simon4
  • Hofmann-Apitius, Martin1, 2
  • Fröhlich, Holger1, 2, 5
  • 1 Fraunhofer Institute for Algorithms and Scientific Computing (SCAI)
  • 2 Rheinische Friedrich-Wilhelms-Universität Bonn
  • 3 Erasmus MC University Medical Center
  • 4 University of Oxford
  • 5 UCB Biosciences GmbH
Type
Published Article
Journal
The EPMA Journal
Publisher
Springer International Publishing
Publication Date
Jun 22, 2020
Volume
11
Issue
3
Pages
367–376
Identifiers
DOI: 10.1007/s13167-020-00216-z
PMID: 32843907
PMCID: PMC7429672
Source
PubMed Central
Disciplines
  • Research
License
Unknown

Abstract

Artificial intelligence (AI) approaches pose a great opportunity for individualized, pre-symptomatic disease diagnosis, which plays a key role in the context of personalized, predictive, and finally preventive medicine (PPPM). However, to translate PPPM into clinical practice, it is of utmost importance that AI-based models are carefully validated. The validation process comprises several steps, one of which is testing the model on patient-level data from an independent clinical cohort study. However, recruitment criteria can bias statistical analysis of cohort study data and impede model application beyond the training data. To evaluate whether and how data from independent clinical cohort studies differ from each other, this study systematically compares the datasets collected in two major dementia cohorts, namely the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and AddNeuroMed. The comparison was conducted at the level of individual features and revealed significant differences between the two cohorts. Such systematic deviations can hamper the generalizability of results that are based on a single cohort dataset. Despite the identified differences, validation of a previously published, ADNI-trained model for prediction of personalized dementia risk scores on 244 AddNeuroMed subjects was successful: external validation yielded a high prediction performance of above 80% area under the receiver operating characteristic curve (AUROC) up to 6 years before dementia diagnosis. Propensity score matching identified a subset of AddNeuroMed patients with significantly smaller demographic differences to ADNI. For these patients, an even higher prediction performance was achieved, demonstrating the influence that systematic differences between cohorts can have on validation results. In conclusion, this study exposes challenges in the external validation of AI models on cohort study data and is one of the rare cases in the neurology field in which such external validation was performed. The presented model represents a proof of concept that reliable models for personalized predictive diagnostics are feasible, which, in turn, could enable adequate disease prevention and thereby the PPPM paradigm in the dementia field. Electronic supplementary material: The online version of this article (10.1007/s13167-020-00216-z) contains supplementary material, which is available to authorized users.
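The workflow summarized in the abstract (applying a model trained on ADNI unchanged to an independent cohort, quantifying performance as AUROC, and using propensity score matching on demographics to select a more comparable validation subset) can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the authors' code: the file names, the column names ("age", "sex", "education", "label"), and the scikit-learn classifier interface are all placeholders.

```python
import numpy as np
import pandas as pd
import joblib
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.neighbors import NearestNeighbors

# Load a frozen model trained on ADNI and the independent validation cohort.
# Both file names are placeholders; the model is assumed to follow the
# scikit-learn classifier interface (predict_proba).
model = joblib.load("adni_trained_model.pkl")
external = pd.read_csv("addneuromed_features.csv")

feature_cols = [c for c in external.columns if c != "label"]

# 1) Plain external validation: score all subjects of the independent cohort
#    with the unchanged model and report the area under the ROC curve.
risk = model.predict_proba(external[feature_cols])[:, 1]
print("External AUROC:", roc_auc_score(external["label"], risk))

# 2) Propensity score matching on demographics (assumed to be numeric columns)
#    to select validation subjects that resemble the training cohort.
train = pd.read_csv("adni_demographics.csv")
demo_cols = ["age", "sex", "education"]
pooled = pd.concat(
    [train[demo_cols].assign(cohort=0), external[demo_cols].assign(cohort=1)],
    ignore_index=True,
)
ps = LogisticRegression(max_iter=1000).fit(pooled[demo_cols], pooled["cohort"])
score = ps.predict_proba(pooled[demo_cols])[:, 1]

train_ps = score[(pooled["cohort"] == 0).values].reshape(-1, 1)
ext_ps = score[(pooled["cohort"] == 1).values].reshape(-1, 1)

# Match each training subject to its nearest external subject on the
# propensity score and keep the matched external subset.
nn = NearestNeighbors(n_neighbors=1).fit(ext_ps)
matched_pos = np.unique(nn.kneighbors(train_ps, return_distance=False).ravel())
matched = external.iloc[matched_pos]

matched_risk = model.predict_proba(matched[feature_cols])[:, 1]
print("Matched-subset AUROC:", roc_auc_score(matched["label"], matched_risk))
```

In this sketch, matching is plain nearest-neighbor on the propensity score without a caliper; the matching procedure actually used in the paper may differ.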
