Visual interpretation of [18F]Florbetaben PET supported by deep learning-based estimation of amyloid burden.

Authors
  • Kim, Ji-Young1, 2
  • Oh, Dongkyu1
  • Sung, Kiyoung3
  • Choi, Hongyoon4
  • Paeng, Jin Chul1
  • Cheon, Gi Jeong1, 5, 6
  • Kang, Keon Wook1
  • Lee, Dong Young3, 7
  • Lee, Dong Soo1, 8
  • 1 Department of Nuclear Medicine, Seoul National University Hospital, Seoul, Republic of Korea
  • 2 Department of Nuclear Medicine, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
  • 3 Department of Neuropsychiatry, Seoul National University Hospital, Seoul, Republic of Korea
  • 4 Department of Nuclear Medicine, Seoul National University Hospital, Seoul, Republic of Korea. [email protected]
  • 5 Institute on Aging, Seoul National University, Seoul, Republic of Korea
  • 6 Radiation Medicine Institute, Seoul National University College of Medicine, Seoul, Republic of Korea
  • 7 Department of Psychiatry, Seoul National University College of Medicine, Seoul, Republic of Korea
  • 8 Department of Molecular Medicine and Biopharmaceutical Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Republic of Korea
Type
Published Article
Journal
European Journal of Nuclear Medicine and Molecular Imaging
Publisher
Springer-Verlag
Publication Date
Apr 01, 2021
Volume
48
Issue
4
Pages
1116–1123
Identifiers
DOI: 10.1007/s00259-020-05044-x
PMID: 32990807
Source
Medline
Language
English
License
Unknown

Abstract

Amyloid PET, which has been widely used for noninvasive assessment of cortical amyloid burden, is visually interpreted in the clinical setting. To provide a fast and easy-to-use support system for visual interpretation, we analyzed whether deep learning-based end-to-end estimation of amyloid burden improves inter-reader agreement as well as the confidence of visual reading. A total of 121 clinical routine [18F]Florbetaben PET images were collected for a randomized blind-reader study. The amyloid PET images were visually interpreted by three experts independently, each blinded to other information. In the first reading session, the readers interpreted the images qualitatively, without quantification. After an interval of more than 2 weeks, the readers interpreted the images again, this time with the quantification results provided by the deep learning system. The qualitative assessment was based on a 3-point BAPL score (1: no amyloid load, 2: minor amyloid load, 3: significant amyloid load). The confidence of each reading session was rated on a 3-point score (0: ambiguous, 1: probable, 2: definite). Inter-reader agreement for the visual reading on the 3-point BAPL scale, calculated with Fleiss' kappa coefficient, was 0.46 without the deep learning system and 0.76 with it. Reader confidence also improved in the session with the deep learning output (1.27 ± 0.078 for the visual-reading-only session vs. 1.66 ± 0.63 for the session with the deep learning system). Our results highlight the impact of a deep learning-based one-step amyloid burden estimation system on inter-reader agreement and reading confidence when applied to clinical routine amyloid PET reading.
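The inter-reader agreement figures above are Fleiss' kappa coefficients, which measure chance-corrected agreement among a fixed number of raters assigning categorical scores. As a rough illustration of how such a coefficient can be computed for 3-point BAPL ratings from three readers, the Python sketch below uses the statsmodels library and a small set of hypothetical ratings; it is not the authors' analysis code.

# Minimal sketch of computing Fleiss' kappa for inter-reader agreement
# on 3-point BAPL scores. The ratings below are hypothetical examples,
# not data from the study.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = PET scans, columns = the three blinded readers,
# values = BAPL score assigned by each reader (1, 2, or 3).
ratings = np.array([
    [1, 1, 1],
    [3, 3, 3],
    [2, 1, 2],
    [3, 2, 3],
    [1, 1, 2],
    [2, 2, 2],
])

# aggregate_raters converts subject-by-rater scores into the
# subject-by-category count table that fleiss_kappa expects.
counts, categories = aggregate_raters(ratings)
kappa = fleiss_kappa(counts, method='fleiss')
print(f"Fleiss' kappa: {kappa:.2f}")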
