A Single Loop EM Algorithm for the Mixture of Experts Architecture

  • Authors:
  • Yan Yang; Jinwen Ma

  • Affiliations:
  • Department of Information Science, School of Mathematical Sciences & LMAM, Peking University, Beijing 100871, P. R. China (both authors)

  • Venue:
  • ISNN 2009: Proceedings of the 6th International Symposium on Neural Networks: Advances in Neural Networks - Part II
  • Year:
  • 2009


Abstract

The mixture of experts (ME) architecture is a powerful neural network model for supervised learning, consisting of a number of "expert" networks plus a gating network. The expectation-maximization (EM) algorithm can be used to learn the parameters of the ME architecture, and several implementations of it already exist, such as the IRLS algorithm, the ECM algorithm, and an approximation to the Newton-Raphson algorithm. These implementations differ in how the gating network is trained, and all of them result in a double-loop training procedure: an inner loop of training nested inside the general, or outer, loop. In this paper, we propose a least mean square regression method that computes the parameters of the gating network directly, which leads to a single-loop EM algorithm (i.e., one with no inner training loop) for the ME architecture. Simulation experiments demonstrate that the proposed EM algorithm outperforms the existing ones in both speed and classification accuracy.
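
As a concrete illustration of the single-loop idea, the sketch below implements one EM iteration for a mixture of linear-Gaussian experts with a softmax gating network in Python/NumPy. The model choices (linear experts, Gaussian noise) and the particular gating update (an ordinary least-squares regression of the centred log responsibilities onto the inputs) are assumptions made for illustration; they follow the general strategy described in the abstract, replacing the inner IRLS loop with a direct least-squares fit, rather than the authors' exact formulation.

    # Illustrative sketch only: linear-Gaussian experts and a softmax gating
    # network are assumed; the closed-form gating update is one possible way
    # to realize the "no inner loop" idea, not the paper's exact method.
    import numpy as np

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def em_step(X, y, W, sigma2, V):
        """One single-loop EM iteration for a mixture of k linear-Gaussian experts.

        X: (n, d) inputs with a bias column, y: (n,) targets,
        W: (k, d) expert regression weights, sigma2: (k,) expert noise variances,
        V: (k, d) gating weights. Parameter arrays are updated in place.
        """
        n, d = X.shape
        k = W.shape[0]

        # E-step: posterior responsibility of each expert for each sample.
        gate = softmax(X @ V.T)                               # (n, k)
        resid = y[:, None] - X @ W.T                          # (n, k)
        lik = np.exp(-0.5 * resid ** 2 / sigma2) / np.sqrt(2.0 * np.pi * sigma2)
        h = gate * lik
        h /= h.sum(axis=1, keepdims=True)                     # responsibilities (n, k)

        # M-step, experts: closed-form weighted least squares per expert.
        for j in range(k):
            A = X.T @ (h[:, [j]] * X) + 1e-8 * np.eye(d)      # small ridge term for stability
            W[j] = np.linalg.solve(A, X.T @ (h[:, j] * y))
            sigma2[j] = (h[:, j] * (y - X @ W[j]) ** 2).sum() / h[:, j].sum()

        # M-step, gating network: one least-squares regression of the (centred)
        # log responsibilities onto X, instead of an inner IRLS/Newton loop.
        T = np.log(h + 1e-12)
        T -= T.mean(axis=1, keepdims=True)                    # softmax is shift-invariant
        V[:] = np.linalg.lstsq(X, T, rcond=None)[0].T

        return h

Because both M-step updates in this sketch are closed-form, each EM iteration is a single pass with no inner optimization loop, which is the structural property behind the speed advantage claimed in the abstract.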