
Automatic glaucoma detection based on transfer induced attention network

Authors
  • Xu, Xi1
  • Guan, Yu1
  • Li, Jianqiang1
  • Ma, Zerui1
  • Zhang, Li2
  • Li, Li2
  • 1 Beijing University of Technology, Beijing, China
  • 2 Capital Medical University, Beijing, China
Type
Published Article
Journal
BioMedical Engineering OnLine
Publisher
Springer (Biomed Central Ltd.)
Publication Date
Apr 23, 2021
Volume
20
Issue
1
Identifiers
DOI: 10.1186/s12938-021-00877-5
Source
Springer Nature
License
Green

Abstract

Background
Glaucoma is one of the leading causes of irreversible vision loss. Automatic glaucoma detection based on fundus images has been widely studied in recent years. However, existing methods mainly depend on a considerable amount of labeled data to train the model, which is a serious constraint for real-world glaucoma detection.

Methods
In this paper, we introduce a transfer learning technique that leverages fundus features learned from similar ophthalmic data to facilitate glaucoma diagnosis. Specifically, we propose a Transfer Induced Attention Network (TIA-Net) for automatic glaucoma detection, which extracts discriminative features that fully characterize glaucoma-related deep patterns under limited supervision. By integrating channel-wise attention and maximum mean discrepancy, the proposed method achieves a smooth transition between general and specific features, thus enhancing feature transferability.

Results
To precisely delimit the boundary between general and specific features, we first investigate how many layers should be transferred during training with the source-dataset network. Next, we compare the proposed model to previously reported methods and analyze their performance. Finally, exploiting the model design, we provide a transparent and interpretable visualization of the transfer by highlighting the key specific features in each fundus image. We evaluate the effectiveness of TIA-Net on two real clinical datasets and achieve an accuracy of 85.7%/76.6%, sensitivity of 84.9%/75.3%, specificity of 86.9%/77.2%, and AUC of 0.929/0.835, far better than other state-of-the-art methods.

Conclusion
Different from previous studies that applied classic CNN models to transfer features from non-medical datasets, we leverage knowledge from a similar ophthalmic dataset and propose an attention-based deep transfer learning model for the glaucoma diagnosis task. Extensive experiments on two real clinical datasets show that our TIA-Net outperforms other state-of-the-art methods, and it also holds medical value and significance for the early diagnosis of other conditions.
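The abstract names maximum mean discrepancy (MMD) as the mechanism for aligning source and target feature distributions during transfer. As a rough illustration only (this is not the paper's implementation; the RBF kernel, the bandwidth `gamma`, and the NumPy setting are all assumptions), a biased squared-MMD estimate between two batches of feature vectors can be sketched as:

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.1):
    # Pairwise RBF kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)
    sq = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2 * a @ b.T
    return np.exp(-gamma * sq)

def mmd2(source, target, gamma=0.1):
    # Biased estimate of squared MMD between two feature batches:
    # mean k(s, s') + mean k(t, t') - 2 * mean k(s, t)
    k_ss = rbf_kernel(source, source, gamma).mean()
    k_tt = rbf_kernel(target, target, gamma).mean()
    k_st = rbf_kernel(source, target, gamma).mean()
    return k_ss + k_tt - 2 * k_st

# Toy features: a target batch drawn from the same distribution as the
# source yields a smaller MMD than one drawn from a shifted distribution.
rng = np.random.default_rng(0)
src = rng.normal(0, 1, (64, 8))
tgt_near = rng.normal(0, 1, (64, 8))
tgt_far = rng.normal(3, 1, (64, 8))

same = mmd2(src, tgt_near)
shifted = mmd2(src, tgt_far)
```

In a transfer setting such as the one the abstract describes, a term like `mmd2` would typically be added to the classification loss so that minimizing it pulls the source and target feature distributions together.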
