Factorial Hidden Markov Models

  • Authors:
  • Zoubin Ghahramani; Michael I. Jordan

  • Affiliations:
  • Department of Computer Science, University of Toronto, Toronto, ON M5S 3H5, Canada. E-mail: zoubin@cs.toronto.edu; Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. E-mail: jordan@psyche.mit.edu

  • Venue:
  • Machine Learning - Special issue on learning with probabilistic representations
  • Year:
  • 1997

Abstract

Hidden Markov models (HMMs) have proven to be one of the most widely used tools for learning probabilistic models of time series data. In an HMM, information about the past is conveyed through a single discrete variable: the hidden state. We discuss a generalization of HMMs in which this state is factored into multiple state variables and is therefore represented in a distributed manner. We describe an exact algorithm for inferring the posterior probabilities of the hidden state variables given the observations, and relate it to the forward–backward algorithm for HMMs and to algorithms for more general graphical models. Due to the combinatorial nature of the hidden state representation, this exact algorithm is intractable. As in other intractable systems, approximate inference can be carried out using Gibbs sampling or variational methods. Within the variational framework, we present a structured approximation in which the state variables are decoupled, yielding a tractable algorithm for learning the parameters of the model. Empirical comparisons suggest that these approximations are efficient and provide accurate alternatives to the exact methods. Finally, we use the structured approximation to model Bach's chorales and show that factorial HMMs can capture statistical structure in this data set which an unconstrained HMM cannot.
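To make the combinatorial cost mentioned in the abstract concrete, the following is a minimal NumPy sketch (a toy illustration, not the paper's implementation; all variable names and parameter values are hypothetical). It builds a factorial HMM with M chains of K states each, forms the K^M joint state space by Cartesian product, and runs the standard forward algorithm on the joint chain. The size of the joint transition matrix, (K^M) × (K^M), is exactly why exact inference becomes intractable as M grows.

```python
import itertools
import numpy as np

# Toy factorial HMM: M independent Markov chains, each with K states,
# jointly explaining one observation per time step. Exact inference
# operates on the Cartesian product of the chains, so the effective
# state space has K**M states.
rng = np.random.default_rng(0)
M, K, T = 3, 2, 5  # chains, states per chain, time steps (arbitrary)

# Per-chain transition matrices (each row sums to 1) and initial distributions.
A = [rng.dirichlet(np.ones(K), size=K) for _ in range(M)]
pi = [rng.dirichlet(np.ones(K)) for _ in range(M)]

# Enumerate all K**M joint states as tuples (s_1, ..., s_M).
joint_states = list(itertools.product(range(K), repeat=M))
S = len(joint_states)  # == K**M

# The joint transition probability factorizes across chains:
# P(s -> t) = prod_m A_m[s_m, t_m].
A_joint = np.array([[np.prod([A[m][s[m], t[m]] for m in range(M)])
                     for t in joint_states] for s in joint_states])
pi_joint = np.array([np.prod([pi[m][s[m]] for m in range(M)])
                     for s in joint_states])

# Placeholder emission likelihoods p(y_t | joint state) for a fixed
# observation sequence; any positive values suffice for the demo.
B = rng.uniform(0.1, 1.0, size=(T, S))

# Standard forward recursion on the joint chain: O(T * K**(2M)) time.
alpha = pi_joint * B[0]
for t in range(1, T):
    alpha = (alpha @ A_joint) * B[t]
log_likelihood = np.log(alpha.sum())
```

The structured variational approximation described in the abstract avoids ever forming `A_joint`: it decouples the chains and works with the M per-chain distributions instead, reducing the cost from exponential to linear in M per chain.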