State-Space Models: From the EM Algorithm to a Gradient Approach

  • Authors:
  • Rasmus Kongsgaard Olsson; Kaare Brandt Petersen; Tue Lehn-Schiøler

  • Affiliations:
  • Technical University of Denmark, 2800 Kongens Lyngby, Denmark (rko@imm.dtu.dk; kbp@epital.dk; tls@imm.dtu.dk)

  • Venue:
  • Neural Computation
  • Year:
  • 2007

Abstract

Slow convergence is observed in the expectation-maximization (EM) algorithm for linear state-space models. We propose to circumvent the problem by applying any off-the-shelf quasi-Newton-type optimizer, which operates on the gradient of the log-likelihood function. Such an optimizer is a practical alternative because the exact gradient of the log-likelihood can be computed by recycling components of the EM algorithm. We demonstrate the efficiency of the proposed method in three relevant instances of the linear state-space model. At high signal-to-noise ratios, where EM is particularly prone to converge slowly, we show that gradient-based learning yields a sizable reduction in computation time.
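
The abstract's central point is that the exact gradient of the marginal log-likelihood can be assembled from the same smoothed moments an EM E-step already computes (Fisher's identity), and then handed to any off-the-shelf quasi-Newton optimizer. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: it fixes C, Q, and R, learns only the transition matrix A, and uses SciPy's L-BFGS-B as the quasi-Newton method. All function names and toy parameters here are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical sketch (not the paper's code): fit the transition matrix A of
#   x_t = A x_{t-1} + w_t,  w_t ~ N(0, Q)
#   y_t = C x_t     + v_t,  v_t ~ N(0, R)
# by quasi-Newton optimization of the exact log-likelihood, with the gradient
# built from Kalman-smoother statistics (Fisher's identity) -- the very
# quantities an EM E-step would compute.

def kalman_smoother_stats(y, A, C, Q, R, mu0, P0):
    """Kalman filter + RTS smoother.

    Returns the log-likelihood and the smoothed sufficient statistics
    S_cross = sum_t E[x_{t+1} x_t' | y] and S_lag = sum_t E[x_t x_t' | y]
    over all transitions.
    """
    T, dy = y.shape
    dx = A.shape[0]
    xp = np.zeros((T, dx)); Pp = np.zeros((T, dx, dx))   # predicted moments
    xf = np.zeros((T, dx)); Pf = np.zeros((T, dx, dx))   # filtered moments
    loglik = 0.0
    for t in range(T):
        if t == 0:
            xp[t], Pp[t] = mu0, P0
        else:
            xp[t] = A @ xf[t - 1]
            Pp[t] = A @ Pf[t - 1] @ A.T + Q
        S = C @ Pp[t] @ C.T + R                  # innovation covariance
        e = y[t] - C @ xp[t]                     # innovation
        Sinv = np.linalg.inv(S)
        loglik += -0.5 * (dy * np.log(2 * np.pi)
                          + np.linalg.slogdet(S)[1] + e @ Sinv @ e)
        K = Pp[t] @ C.T @ Sinv                   # Kalman gain
        xf[t] = xp[t] + K @ e
        Pf[t] = Pp[t] - K @ C @ Pp[t]
    # RTS smoother, accumulating the E-step sufficient statistics.
    xs = xf.copy(); Ps = Pf.copy()
    S_cross = np.zeros((dx, dx)); S_lag = np.zeros((dx, dx))
    for t in range(T - 2, -1, -1):
        J = Pf[t] @ A.T @ np.linalg.inv(Pp[t + 1])       # smoother gain
        xs[t] = xf[t] + J @ (xs[t + 1] - xp[t + 1])
        Ps[t] = Pf[t] + J @ (Ps[t + 1] - Pp[t + 1]) @ J.T
        P_cross = Ps[t + 1] @ J.T                # lag-one smoothed covariance
        S_cross += P_cross + np.outer(xs[t + 1], xs[t])
        S_lag += Ps[t] + np.outer(xs[t], xs[t])
    return loglik, S_cross, S_lag

def neg_loglik_and_grad(a_flat, y, C, Q, R, mu0, P0, dx):
    """Objective and exact gradient for L-BFGS (Fisher's identity)."""
    A = a_flat.reshape(dx, dx)
    ll, S_cross, S_lag = kalman_smoother_stats(y, A, C, Q, R, mu0, P0)
    # dL/dA = Q^{-1} (S_cross - A S_lag): recycled E-step statistics.
    grad_A = np.linalg.inv(Q) @ (S_cross - A @ S_lag)
    return -ll, -grad_A.ravel()

# Toy usage: simulate data, then fit A with an off-the-shelf quasi-Newton run.
rng = np.random.default_rng(0)
dx, dy, T = 2, 2, 200
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.eye(dy, dx); Q = 0.1 * np.eye(dx); R = 0.5 * np.eye(dy)
x = np.zeros(dx); ys = []
for _ in range(T):
    x = A_true @ x + rng.multivariate_normal(np.zeros(dx), Q)
    ys.append(C @ x + rng.multivariate_normal(np.zeros(dy), R))
y = np.array(ys)

res = minimize(neg_loglik_and_grad, x0=0.5 * np.eye(dx).ravel(),
               args=(y, C, Q, R, np.zeros(dx), np.eye(dx), dx),
               jac=True, method="L-BFGS-B")
print(res.x.reshape(dx, dx))
```

The gradient line is the "recycling" step: S_cross and S_lag are exactly the sufficient statistics an EM M-step would use to re-estimate A, so one filter-smoother pass yields both the objective and its exact gradient at essentially the cost of a single EM iteration.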