
Efficient inference in state-space models through adaptive learning in online Monte Carlo expectation maximization.

Authors
  • Henderson, Donna (1)
  • Lunter, Gerton (2)
  • (1) Wellcome Centre for Human Genetics, University of Oxford, Oxford, OX3 7BN, UK.
  • (2) MRC Weatherall Institute of Molecular Medicine, University of Oxford, Oxford, OX3 9DS, UK.
Type
Published Article
Journal
Computational Statistics
Publication Date
Jan 01, 2020
Volume
35
Issue
3
Pages
1319–1344
Identifiers
DOI: 10.1007/s00180-019-00937-4
PMID: 32764847
Source
Medline
Language
English
License
Unknown

Abstract

Expectation maximization (EM) is a technique for estimating maximum-likelihood parameters of a latent variable model given observed data, alternating between taking expectations of sufficient statistics and maximizing the expected log likelihood. For situations where the sufficient statistics are intractable, stochastic approximation EM (SAEM), which approximates the expected log likelihood with Monte Carlo techniques, is often used. Two common implementations of SAEM, batch EM (BEM) and online EM (OEM), are parameterized by a "learning rate", and their efficiency depends strongly on this parameter. We propose an extension to the OEM algorithm, termed Introspective Online Expectation Maximization (IOEM), which removes the need to specify this parameter by adapting the learning rate to trends in the parameter updates. We show that our algorithm matches the efficiency of the optimal BEM and OEM algorithms in multiple models, and that the efficiency of IOEM can exceed that of BEM/OEM methods with optimal learning rates when the model has many parameters. Finally, we use IOEM to fit two models to a financial time series. A Python implementation is available at https://github.com/luntergroup/IOEM.git. © The Author(s) 2019.
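
To make the role of the "learning rate" concrete, below is a minimal Python sketch of an online EM update for a toy two-component Gaussian mixture with unit variances. The stochastic-approximation form, blending running sufficient statistics with a stepsize gamma_n = (n + 10)^(-alpha), is the standard OEM recipe the abstract alludes to; the model, the function name online_em_gmm, the schedule, and the offset are illustrative assumptions, not the paper's state-space model or the IOEM algorithm itself (see the linked repository for that).

import numpy as np

def online_em_gmm(ys, alpha=0.6):
    """Toy online EM for a two-component Gaussian mixture (unit variances).

    Running sufficient statistics are blended with a stepsize gamma_n,
    the 'learning rate' that OEM requires the user to choose and that
    IOEM adapts automatically. Illustrative sketch only.
    """
    w = np.array([0.5, 0.5])      # mixture weights
    mu = np.array([-1.0, 1.0])    # component means (initial guess)
    s0 = w.copy()                 # running stat: E[responsibility]
    s1 = w * mu                   # running stat: E[responsibility * y]
    for n, y in enumerate(ys, start=1):
        # E-step for one observation: posterior responsibilities
        log_r = np.log(w) - 0.5 * (y - mu) ** 2
        r = np.exp(log_r - log_r.max())
        r /= r.sum()
        # Stochastic-approximation blend; the +10 offset keeps the very
        # first steps from erasing the initialization (an assumed choice).
        gamma = (n + 10.0) ** (-alpha)
        s0 = (1 - gamma) * s0 + gamma * r
        s1 = (1 - gamma) * s1 + gamma * r * y
        # M-step: closed-form parameters from the running statistics
        w, mu = s0 / s0.sum(), s1 / s0
    return w, mu

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2.0, 1.0, 5000), rng.normal(2.0, 1.0, 5000)])
rng.shuffle(data)
print(online_em_gmm(data))  # weights near (0.5, 0.5), means near (-2, 2)

On a simulated draw like the one above, this recovers the mixture parameters, but how quickly depends on alpha; per the abstract, IOEM's contribution is to adapt the learning rate to trends in the parameter updates rather than requiring the user to tune such a schedule in advance.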
