Modeling with Mixtures of Linear Regressions

  • Authors:
  • Kert Viele; Barbara Tong

  • Affiliations:
  • Department of Statistics, University of Kentucky, Lexington, KY 40506-0027, USA (both authors)

  • Venue:
  • Statistics and Computing
  • Year:
  • 2002


Abstract

Consider data (x1,y1),…,(xn,yn), where each xi may be vector valued and the distribution of yi given xi is a mixture of linear regressions. This generalizes mixture models that do not include covariates in the mixture formulation. The mixture of linear regressions formulation has appeared in the computer science literature under the name “Hierarchical Mixtures of Experts” model. The model has been considered from both frequentist and Bayesian viewpoints; we focus on the Bayesian formulation. Previously, estimation of the mixture of linear regressions model has been done through straightforward Gibbs sampling with latent variables. This paper contributes to the field in three major areas. First, we provide a theoretical underpinning to the Bayesian implementation by demonstrating consistency of the posterior distribution; this is done by extending results on bracketing entropy in Barron, Schervish and Wasserman (Annals of Statistics 27: 536–561, 1999) to the regression setting. Second, we demonstrate through examples that straightforward Gibbs sampling may fail to explore the posterior distribution effectively, and we provide alternative algorithms that are more accurate. Third, we demonstrate the usefulness of the mixture of linear regressions framework in Bayesian robust regression. The methods described in the paper are applied to two examples.
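To make the latent-variable Gibbs sampling mentioned in the abstract concrete, the sketch below implements a basic Gibbs sampler for a K-component mixture of linear regressions. It is an illustrative simplification, not the paper's exact algorithm: it assumes a known, common noise variance `sigma2`, independent N(0, tau2·I) priors on each component's coefficient vector, and a symmetric Dirichlet(alpha) prior on the mixture weights; all function and parameter names are invented for this example.

```python
import numpy as np


def gibbs_mix_linreg(X, y, K=2, iters=500, sigma2=1.0, tau2=10.0,
                     alpha=1.0, seed=0):
    """Gibbs sampler for a K-component mixture of linear regressions.

    Illustrative sketch only: known noise variance sigma2, N(0, tau2 I)
    priors on coefficients, symmetric Dirichlet(alpha) prior on weights.
    Returns an array of shape (iters, K, p) of coefficient draws.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    z = rng.integers(K, size=n)        # latent component labels
    beta = rng.normal(size=(K, p))     # per-component regression coefficients
    w = np.full(K, 1.0 / K)            # mixture weights
    draws = []
    for _ in range(iters):
        # 1. Sample labels z_i given coefficients and weights.
        resid = y[:, None] - X @ beta.T               # n x K residuals
        logp = np.log(w) - 0.5 * resid**2 / sigma2
        logp -= logp.max(axis=1, keepdims=True)       # numerical stability
        prob = np.exp(logp)
        prob /= prob.sum(axis=1, keepdims=True)
        u = rng.random(n)
        z = (prob.cumsum(axis=1) < u[:, None]).sum(axis=1)
        # 2. Sample weights from their conditional Dirichlet distribution.
        counts = np.bincount(z, minlength=K)
        w = rng.dirichlet(alpha + counts)
        # 3. Sample each component's coefficients (conjugate normal update).
        for k in range(K):
            Xk, yk = X[z == k], y[z == k]
            prec = Xk.T @ Xk / sigma2 + np.eye(p) / tau2
            cov = np.linalg.inv(prec)
            mean = cov @ (Xk.T @ yk) / sigma2
            beta[k] = rng.multivariate_normal(mean, cov)
        draws.append(beta.copy())
    return np.array(draws)
```

As the abstract notes, this straightforward sampler can fail to explore the posterior effectively (e.g. it may stay trapped near one labeling of the components); the paper's second contribution is precisely a set of more accurate alternative algorithms for this model.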