Hierarchical mixtures of experts and the EM algorithm

  • Authors:
  • Michael I. Jordan; Robert A. Jacobs

  • Venue:
  • Neural Computation
  • Year:
  • 1994

Abstract

We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain.
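For intuition, here is a minimal, hypothetical sketch in Python of the core idea: a single-level mixture of linear experts with a softmax gate, fit by EM. The synthetic data, the helper names (`fit_mixture`, `softmax`), the shared noise scale `sigma`, and the plain gradient step on the gate are all illustrative assumptions; the paper's architecture nests such mixtures in a tree and fits both gates and experts as GLIM's (e.g., via IRLS) inside the M-step.

```python
# Minimal EM sketch for a one-level mixture of linear experts.
# Not the paper's code; an illustrative approximation only.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_mixture(X, y, n_experts=3, n_iter=50, sigma=0.5, lr=0.1):
    n, d = X.shape
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(n_experts, d))  # expert (regression) weights
    V = rng.normal(scale=0.1, size=(n_experts, d))  # gating weights
    for _ in range(n_iter):
        # E-step: posterior responsibility of each expert for each data point,
        # proportional to gate prior times Gaussian expert likelihood.
        g = softmax(X @ V.T)                         # gating priors, shape (n, k)
        mu = X @ W.T                                 # expert means, shape (n, k)
        lik = np.exp(-0.5 * ((y[:, None] - mu) / sigma) ** 2)
        h = g * lik
        h /= h.sum(axis=1, keepdims=True) + 1e-12
        # M-step for the experts: responsibility-weighted least squares.
        for i in range(n_experts):
            Xw = X * h[:, i:i + 1]
            W[i] = np.linalg.solve(Xw.T @ X + 1e-6 * np.eye(d), Xw.T @ y)
        # M-step for the gate: one gradient step toward the posteriors
        # (the paper fits the gate as a GLIM; this keeps the sketch short).
        V += lr * (h - g).T @ X / n
    return W, V

# Usage: piecewise-linear 1-D data, with a bias feature appended to X.
rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, size=400)
y = np.where(x < 0, -2 * x - 1, x + 1) + rng.normal(scale=0.1, size=400)
X = np.column_stack([x, np.ones_like(x)])
W, V = fit_mixture(X, y)
pred = (softmax(X @ V.T) * (X @ W.T)).sum(axis=1)
print("RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```

Nesting two such softmax gates, one above the other, would give the two-level hierarchy the paper describes; the on-line variant replaces the batch E- and M-steps with incremental per-sample updates.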