Improving Intensive Care Unit Early Readmission Prediction Using Optimized and Explainable Machine Learning.

Authors
  • González-Nóvoa, José A1
  • Campanioni, Silvia1
  • Busto, Laura1
  • Fariña, José2
  • Rodríguez-Andina, Juan J2
  • Vila, Dolores3
  • Íñiguez, Andrés4
  • Veiga, César1
  • 1 Galicia Sur Health Research Institute (IIS Galicia Sur), Álvaro Cunqueiro Hospital, 36310 Vigo, Spain
  • 2 Department of Electronic Technology, University of Vigo, 36310 Vigo, Spain
  • 3 Intensive Care Unit Department, Complexo Hospitalario Universitario de Vigo (SERGAS), Álvaro Cunqueiro Hospital, 36213 Vigo, Spain
  • 4 Cardiology Department, Complexo Hospitalario Universitario de Vigo (SERGAS), Álvaro Cunqueiro Hospital, 36213 Vigo, Spain
Type
Published Article
Journal
International Journal of Environmental Research and Public Health
Publisher
MDPI AG
Publication Date
Feb 16, 2023
Volume
20
Issue
4
Identifiers
DOI: 10.3390/ijerph20043455
PMID: 36834150
Source
Medline
Keywords
Language
English
License
Unknown

Abstract

It is of great interest to develop and introduce new techniques to automatically and efficiently analyze the enormous amount of data generated in today's hospitals, using state-of-the-art artificial intelligence methods. Patients readmitted to the ICU during the same hospital stay have higher risks of mortality and morbidity, longer lengths of stay, and increased costs. The methodology proposed to predict ICU readmission could therefore improve patient care. The objective of this work is to explore and evaluate the potential improvement of existing models for predicting early ICU patient readmission by using optimized artificial intelligence algorithms and explainability techniques. In this work, XGBoost is used as the predictor model, combined with Bayesian techniques to optimize it. The results obtained for predicting early ICU readmission (AUROC of 0.92 ± 0.03) improve on the state-of-the-art works consulted (whose AUROC values range between 0.66 and 0.78). Moreover, we explain the internal functioning of the model using Shapley Additive Explanation-based techniques, allowing us to understand the model's internal behavior and to obtain useful information, such as patient-specific insights, the thresholds at which a feature begins to be critical for a certain group of patients, and the feature importance ranking.
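
The sketch below is not the authors' code; it is a minimal illustration of the kind of pipeline the abstract describes (an XGBoost classifier tuned with Bayesian hyperparameter optimization, evaluated by AUROC, and explained with SHAP). The synthetic data, feature set, search space, and library choices (scikit-optimize, the shap package) are assumptions for illustration only.

```python
# Hedged sketch of an XGBoost + Bayesian optimization + SHAP pipeline.
# Data, features, and hyperparameter ranges are illustrative, not the paper's.
import numpy as np
import shap
import xgboost as xgb
from skopt import BayesSearchCV
from skopt.space import Integer, Real
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an ICU readmission dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=2000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Bayesian search over an assumed XGBoost hyperparameter space, scored by AUROC.
search = BayesSearchCV(
    estimator=xgb.XGBClassifier(eval_metric="logloss"),
    search_spaces={
        "n_estimators": Integer(50, 500),
        "max_depth": Integer(2, 10),
        "learning_rate": Real(1e-3, 0.3, prior="log-uniform"),
        "subsample": Real(0.5, 1.0),
    },
    n_iter=30,
    cv=5,
    scoring="roc_auc",
    random_state=0,
)
search.fit(X_train, y_train)

# AUROC on held-out data (the metric reported in the abstract).
auroc = roc_auc_score(y_test, search.predict_proba(X_test)[:, 1])
print(f"Test AUROC: {auroc:.3f}")

# Shapley Additive Explanation values for feature-level model explanations.
explainer = shap.TreeExplainer(search.best_estimator_)
shap_values = explainer.shap_values(X_test)
print("Mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```

The mean absolute SHAP values give a feature importance ranking, and per-patient SHAP values support the kind of patient-specific explanations the abstract refers to.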
