Multimodal MR Synthesis via Modality-Invariant Latent Representation.

Authors
  • Chartsias, Agisilaos1
  • Joyce, Thomas1
  • Giuffrida, Mario Valerio2
  • Tsaftaris, Sotirios A3
  • 1 School of Engineering, The University of Edinburgh.
  • 2 School of Engineering, The University of Edinburgh; IMT Lucca; The Alan Turing Institute, London.
  • 3 School of Engineering, The University of Edinburgh; The Alan Turing Institute, London.
Type
Published Article
Journal
IEEE Transactions on Medical Imaging
Publisher
Institute of Electrical and Electronics Engineers
Publication Date
Oct 17, 2017
Pages
1–1
Identifiers
DOI: 10.1109/TMI.2017.2764326
PMID: 29053447
Source
Medline
License
Unknown

Abstract

We propose a multi-input multi-output fully convolutional neural network model for MRI synthesis. The model is robust to missing data, as it benefits from, but does not require, additional input modalities. The model is trained end-to-end, and learns to embed all input modalities into a shared modality-invariant latent space. These latent representations are then combined into a single fused representation, which is transformed into the target output modality with a learnt decoder. We avoid the need for curriculum learning by exploiting the fact that the various input modalities are highly correlated. We also show that by incorporating information from segmentation masks the model can both decrease its error and generate data with synthetic lesions. We evaluate our model on the ISLES and BRATS datasets and demonstrate statistically significant improvements over state-of-the-art methods for single-input tasks. This improvement increases further when multiple input modalities are used, demonstrating the benefits of learning a common latent space, again resulting in a statistically significant improvement over the current best method. Lastly, we demonstrate our approach on non-skull-stripped brain images, producing a statistically significant improvement over the previous best method. Code is made publicly available at https://github.com/agis85/multimodal_brain_synthesis.
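To make the abstract's architecture concrete, below is a minimal PyTorch sketch of a multi-input multi-output network with per-modality encoders, a fused latent representation, and per-modality decoders. It is not the authors' implementation (see their repository for that): the class and function names (MultimodalSynth, conv_block), the layer sizes, and the choice of element-wise max fusion are all illustrative assumptions, shown only to convey the encoder/fusion/decoder structure and how missing inputs are tolerated.

```python
# Hypothetical sketch of a multi-input multi-output synthesis network.
# Assumptions: single-channel 2D slices, small conv blocks, and element-wise
# max fusion of latent maps. None of these choices are taken from the paper.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, preserving spatial size.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MultimodalSynth(nn.Module):
    def __init__(self, n_inputs, n_outputs, latent_ch=16):
        super().__init__()
        # One encoder per input modality, all mapping into the same
        # shared latent space.
        self.encoders = nn.ModuleList(
            conv_block(1, latent_ch) for _ in range(n_inputs))
        # One decoder per output modality, reading the fused latent map.
        self.decoders = nn.ModuleList(
            nn.Sequential(conv_block(latent_ch, latent_ch),
                          nn.Conv2d(latent_ch, 1, 1))
            for _ in range(n_outputs))

    def forward(self, images):
        # `images` maps modality index -> tensor; missing modalities are
        # simply absent from the dict (at least one must be present).
        latents = [self.encoders[i](x) for i, x in images.items()]
        # Fuse however many latent maps are available into one.
        fused = torch.stack(latents).max(dim=0).values
        return [dec(fused) for dec in self.decoders]

# Usage: synthesize two target modalities from whichever inputs exist.
model = MultimodalSynth(n_inputs=2, n_outputs=2)
x = {0: torch.randn(1, 1, 64, 64)}  # only modality 0 available
outputs = model(x)                  # still produces both targets
```

Because the fusion step reduces over whatever latent maps are present, the same trained network can be queried with any subset of input modalities, which is the property the abstract describes as robustness to missing data.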
