
A New Class of EM Algorithms. Escaping Local Minima and Handling Intractable Sampling

Authors
  • Allassonnière, Stéphanie
  • Chevallier, Juliette
Publication Date
Feb 26, 2019
Source
HAL-UPMC
Language
English
License
Unknown

Abstract

The expectation-maximization (EM) algorithm is a powerful computational technique for maximum likelihood estimation in incomplete data models. When the expectation step cannot be performed in closed form, a stochastic approximation of EM (SAEM) can be used. The convergence of the SAEM toward local maxima of the observed likelihood has been proved, and its numerical efficiency has been demonstrated. However, despite these appealing features, the limit position of this algorithm can strongly depend on its starting position. Moreover, sampling from the posterior distribution may be intractable or have a high computational cost. To cope with these two issues, we propose a new stochastic approximation version of the EM algorithm in which we do not sample from the exact posterior distribution in the expectation phase of the procedure. We first prove the convergence of this algorithm toward local maxima of the observed likelihood. Then, we propose an instantiation of this general procedure designed to favor convergence toward global maxima. Experiments on synthetic and real data highlight the performance of this algorithm in comparison with the SAEM.
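
To illustrate the standard SAEM mechanism the abstract builds on (not the new procedure proposed in the paper), here is a minimal sketch on a toy Gaussian latent-variable model. The model, step-size schedule, and all variable names are assumptions chosen purely for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent-variable model (illustrative, not from the paper):
#   z_i ~ N(theta, tau^2)        (latent)
#   y_i | z_i ~ N(z_i, sigma^2)  (observed)
# Goal: estimate theta by SAEM using the sufficient statistic S = sum_i z_i.

n, theta_true, tau, sigma = 500, 2.0, 1.0, 1.0
z_true = rng.normal(theta_true, tau, n)
y = rng.normal(z_true, sigma)

theta = 0.0          # deliberately poor starting point
S = 0.0              # stochastic approximation of E[sum_i z_i | y, theta]
n_iter, burn_in = 200, 50

for k in range(1, n_iter + 1):
    # Simulation step: draw the latent variables from their posterior.
    # Here the posterior is Gaussian, so exact sampling is easy; the paper's
    # point is that this draw may be replaced by an approximate sampler.
    post_var = (tau**2 * sigma**2) / (tau**2 + sigma**2)
    post_mean = (tau**2 * y + sigma**2 * theta) / (tau**2 + sigma**2)
    z = rng.normal(post_mean, np.sqrt(post_var))

    # Stochastic approximation step: relax the sufficient statistic.
    gamma = 1.0 if k <= burn_in else 1.0 / (k - burn_in) ** 0.7
    S = S + gamma * (z.sum() - S)

    # Maximization step: closed-form update of theta.
    theta = S / n

print(f"SAEM estimate: {theta:.3f}  (true value {theta_true})")
```

In the variant the abstract describes, the exact posterior draw in the simulation step would be replaced by a draw from an approximate distribution; that substitution is not shown in this sketch.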
