An unsupervised learning approach to ultrasound strain elastography with spatio-temporal consistency

Authors
  • Delaunay, Rémi
  • Hu, Yipeng
  • Vercauteren, Tom
Type
Published Article
Journal
Physics in Medicine and Biology
Publisher
IOP Publishing
Publication Date
Sep 03, 2021
Volume
66
Issue
17
Identifiers
DOI: 10.1088/1361-6560/ac176a
PMID: 34298531
PMCID: PMC8417818
Source
PubMed Central
License
Unknown

Abstract

Quasi-static ultrasound elastography (USE) is an imaging modality that measures deformation (i.e. strain) of tissue in response to an applied mechanical force. In USE, the strain image is traditionally obtained by differentiating the displacement field estimated between a pair of radio-frequency (RF) frames. In this work we propose a recurrent network architecture with convolutional long short-term memory decoder blocks to improve displacement estimation and spatio-temporal continuity between time-series ultrasound frames. The network is trained in an unsupervised manner by optimising a similarity metric between the reference and the compressed image. The training loss also includes a regularisation term that preserves displacement continuity by directly optimising strain smoothness, and a temporal continuity term that enforces consistency between successive strain predictions. In addition, we propose an open-access in vivo database for quasi-static USE, which consists of RF data sequences captured on the arm of a human volunteer. Our results from numerical simulation and in vivo data suggest that our recurrent neural network can account for larger deformations than two other feed-forward neural networks. In all experiments, our recurrent network outperformed state-of-the-art learning-based and optimisation-based methods in terms of elastographic signal-to-noise ratio, strain consistency, and image similarity. Finally, our open-source code provides a 3D Slicer visualisation module that can process ultrasound RF frames in real time, at up to 20 frames per second on a standard GPU.
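To make the described training objective concrete, the sketch below illustrates the three loss components named in the abstract: an image-similarity term between the reference frame and the warped compressed frame, a strain-smoothness regulariser, and a temporal consistency term between successive strain predictions. This is not the authors' code; it is a minimal PyTorch interpretation under stated assumptions, with hypothetical names (axial_strain, warp, use_loss), mean-squared error as the similarity metric, and axial displacement only.

import torch
import torch.nn.functional as F


def axial_strain(disp):
    # Strain approximated as the axial (depth-wise) finite difference of the
    # predicted displacement field, disp: (B, 1, H, W).
    return disp[:, :, 1:, :] - disp[:, :, :-1, :]


def warp(moving, disp):
    # Warp the compressed RF frame towards the reference frame using the
    # predicted axial displacement (in pixels).
    b, _, h, w = moving.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=moving.device),
        torch.linspace(-1, 1, w, device=moving.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=-1).expand(b, h, w, 2).clone()
    # Shift the depth coordinate by the displacement, converted to the
    # normalised [-1, 1] units expected by grid_sample.
    grid[..., 1] = grid[..., 1] + 2.0 * disp[:, 0] / (h - 1)
    return F.grid_sample(moving, grid, align_corners=True)


def use_loss(reference, compressed, disp, disp_prev, w_smooth=0.1, w_temporal=0.1):
    # 1) Similarity between the reference and the motion-compensated compressed frame.
    similarity = F.mse_loss(warp(compressed, disp), reference)
    # 2) Regularisation that directly penalises spatial variation of the strain.
    strain = axial_strain(disp)
    smoothness = strain.diff(dim=2).abs().mean() + strain.diff(dim=3).abs().mean()
    # 3) Temporal consistency between successive strain predictions in the sequence.
    temporal = F.l1_loss(strain, axial_strain(disp_prev))
    return similarity + w_smooth * smoothness + w_temporal * temporal

The similarity metric, the form of the regulariser, and the loss weights here are placeholders; the paper itself should be consulted for the exact terms and weighting used in training.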
