
Richer fusion network for breast cancer classification based on multimodal data

Authors
  • Yan, Rui1, 2
  • Zhang, Fa1
  • Rao, Xiaosong3
  • Lv, Zhilong1
  • Li, Jintao1
  • Zhang, Lingling3
  • Liang, Shuang1
  • Li, Yilin4
  • Ren, Fei1
  • Zheng, Chunhou5
  • Liang, Jun3
  • 1 Chinese Academy of Sciences, Beijing, China
  • 2 University of Chinese Academy of Sciences, Beijing, China
  • 3 Peking University International Hospital, Beijing, China
  • 4 Xingtai People’s Hospital, Hebei, China
  • 5 Anhui University, Hefei, China
Type
Published Article
Journal
BMC Medical Informatics and Decision Making
Publisher
Springer (Biomed Central Ltd.)
Publication Date
Apr 22, 2021
Volume
21
Issue
Suppl 1
Identifiers
DOI: 10.1186/s12911-020-01340-6
Source
Springer Nature
License
Green

Abstract

Background

Deep learning algorithms significantly improve the accuracy of pathological image classification, but the accuracy of breast cancer classification using only single-modality pathological images still cannot meet the needs of clinical practice. Inspired by how pathologists read pathological images alongside clinical records when making a diagnosis, we integrate pathological images with structured data extracted from clinical electronic medical records (EMR) to further improve the accuracy of breast cancer classification.

Methods

In this paper, we propose a new richer fusion network for the classification of benign and malignant breast cancer based on multimodal data. So that pathological images can be fused more thoroughly with structured EMR data, we propose a method that extracts a richer multilevel feature representation of the pathological image from multiple convolutional layers. Meanwhile, to minimize the information loss of each modality before fusion, we use a denoising autoencoder to lift the low-dimensional structured EMR data into a high-dimensional space, rather than reducing the high-dimensional image data to a low-dimensional space before fusion. In addition, the denoising autoencoder naturally generalizes our method to make accurate predictions even when the structured EMR data is partially missing.

Results

The experimental results show that the proposed method outperforms the state-of-the-art method in terms of average classification accuracy (92.9%). In addition, we have released a dataset containing structured data extracted from the EMRs of 185 patients together with 3764 paired pathological images of breast cancer, which can be downloaded from http://ear.ict.ac.cn/?page_id=1663.

Conclusions

We used a new richer fusion network to integrate highly heterogeneous data, leveraging structured EMR data to improve the accuracy of pathological image classification. The application of automatic breast cancer classification algorithms in clinical practice therefore becomes possible. Owing to the generality of the proposed fusion method, it can be straightforwardly extended to the fusion of other structured and unstructured data.
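The fusion strategy described in the Methods can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the dimensions, the tanh activation, the Gaussian corruption noise, and the untrained random weights are all assumptions made for the sketch. It shows only the shape of the idea — a denoising autoencoder lifts the low-dimensional structured EMR vector into a high-dimensional code, which is then concatenated with the (here simulated) multilevel image features rather than compressing the image features downward.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: structured EMR data is low-dimensional,
# the pooled multilevel CNN image features are high-dimensional.
EMR_DIM, HIDDEN_DIM, IMG_FEAT_DIM = 10, 512, 512

# Randomly initialised encoder/decoder weights; training (minimising the
# reconstruction error of the clean input from the corrupted one) is omitted.
W_enc = rng.normal(scale=0.1, size=(EMR_DIM, HIDDEN_DIM))
W_dec = rng.normal(scale=0.1, size=(HIDDEN_DIM, EMR_DIM))

def dae_encode(x, noise_std=0.1):
    """Corrupt the structured input, then project it up to a
    high-dimensional code. Corruption also mimics partially
    missing EMR fields at prediction time."""
    x_noisy = x + rng.normal(scale=noise_std, size=x.shape)
    return np.tanh(x_noisy @ W_enc)

def fuse(emr_vec, img_feat):
    """Concatenate the up-projected EMR code with image features,
    so both modalities enter the fusion at comparable dimensionality."""
    return np.concatenate([dae_encode(emr_vec), img_feat])

emr = rng.normal(size=EMR_DIM)       # stand-in for structured EMR fields
img = rng.normal(size=IMG_FEAT_DIM)  # stand-in for multilevel image features

fused = fuse(emr, img)
print(fused.shape)  # (1024,)
```

In the paper's actual network the fused vector would feed a classifier head for the benign/malignant decision; here the point is only that the structured modality is expanded to match the image modality before fusion, so neither side dominates by dimensionality alone.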
