
Neural network gradient Hamiltonian Monte Carlo

Authors
  • Li, Lingge1
  • Holbrook, Andrew2
  • Shahbaba, Babak1
  • Baldi, Pierre1
  • 1 Donald Bren School of Information and Computer Sciences, University of California, Irvine, Irvine, CA, USA
  • 2 Department of Human Genetics, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
Type
Published Article
Journal
Computational Statistics
Publisher
Springer Berlin Heidelberg
Publication Date
Jan 08, 2019
Volume
34
Issue
1
Pages
281–299
Identifiers
DOI: 10.1007/s00180-018-00861-z
Source
Springer Nature

Abstract

Hamiltonian Monte Carlo is a widely used algorithm for sampling from posterior distributions of complex Bayesian models. It can efficiently explore high-dimensional parameter spaces guided by simulated Hamiltonian flows. However, the algorithm requires repeated gradient calculations, and these computations become increasingly burdensome as data sets scale. We present a method that substantially reduces this computational burden by using a neural network to approximate the gradient. First, we prove that the proposed method still converges to the true target distribution, even though the approximate gradient no longer comes from a Hamiltonian system. Second, we conduct experiments on synthetic examples and real data to validate the proposed method.
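To illustrate the idea in the abstract, here is a minimal sketch of an HMC sampler whose leapfrog integrator takes an arbitrary gradient function, which could be a trained neural-network surrogate. This is not the paper's implementation; the key point it demonstrates is that the Metropolis accept/reject step uses the exact log-density, so the chain still targets the true distribution even when the gradient is approximate. The example "surrogate" below is just the exact gradient of a standard Gaussian, standing in for a network's output.

```python
import numpy as np

def hmc_sample(log_prob, grad_log_prob, x0, n_samples=1000,
               step_size=0.1, n_leapfrog=20, rng=None):
    """Basic HMC; grad_log_prob may be an approximation (e.g. a neural
    network surrogate). Exactness is preserved because the Metropolis
    correction below evaluates the exact log_prob."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)           # resample momentum
        x_new, p_new = x.copy(), p.copy()
        # leapfrog integration with the (possibly approximate) gradient
        p_new = p_new + 0.5 * step_size * grad_log_prob(x_new)
        for _ in range(n_leapfrog - 1):
            x_new = x_new + step_size * p_new
            p_new = p_new + step_size * grad_log_prob(x_new)
        x_new = x_new + step_size * p_new
        p_new = p_new + 0.5 * step_size * grad_log_prob(x_new)
        # Metropolis accept/reject with the EXACT log-density
        log_accept = (log_prob(x_new) - 0.5 * p_new @ p_new
                      - log_prob(x) + 0.5 * p @ p)
        if np.log(rng.uniform()) < log_accept:
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# Toy target: standard 2-D Gaussian; grad_surrogate stands in for a
# neural-network approximation of the gradient (hypothetical example).
log_prob = lambda x: -0.5 * x @ x
grad_surrogate = lambda x: -x
samples = hmc_sample(log_prob, grad_surrogate, np.zeros(2),
                     n_samples=2000, rng=0)
```

Because the accept/reject step corrects for integration error, a surrogate gradient only affects acceptance rates and mixing, not the stationary distribution, which is the convergence property the abstract refers to.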
