Mixtures of linear regressions
Computational Statistics & Data Analysis
Consider data (x1, y1), …, (xn, yn), where each xi may be vector valued and the distribution of yi given xi is a mixture of linear regressions. This generalizes standard mixture models, which do not include covariates in the mixture formulation. The mixture of linear regressions formulation has appeared in the computer science literature under the name "Hierarchical Mixtures of Experts," and has been considered from both frequentist and Bayesian viewpoints; we focus on the Bayesian formulation. Previously, the mixture of linear regressions model has been estimated through straightforward Gibbs sampling with latent variables. This paper contributes to the field in three major areas. First, we provide a theoretical underpinning for the Bayesian implementation by demonstrating consistency of the posterior distribution; this is done by extending results of Barron, Schervish and Wasserman (Annals of Statistics 27: 536–561, 1999) on bracketing entropy to the regression setting. Second, we demonstrate through examples that straightforward Gibbs sampling may fail to explore the posterior distribution effectively, and we provide alternative algorithms that are more accurate. Third, we demonstrate the usefulness of the mixture of linear regressions framework for Bayesian robust regression. The methods described in the paper are applied to two examples.
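To make the "straightforward Gibbs sampling with latent variables" concrete, the following is a minimal sketch of such a sampler for a K-component mixture of linear regressions. It is an illustration, not the paper's actual algorithm: the noise standard deviation `sigma` is assumed known for simplicity, the priors (normal on coefficients, Dirichlet on mixing proportions) are generic conjugate choices, and the function name `gibbs_mix_linreg` is hypothetical.

```python
import numpy as np

def gibbs_mix_linreg(X, y, K=2, sigma=0.5, tau=10.0, alpha=1.0,
                     n_iter=500, rng=None):
    """Latent-variable Gibbs sampler for a K-component mixture of
    linear regressions (illustrative sketch; known noise sd sigma).

    Model: y_i | z_i = k  ~  N(x_i' beta_k, sigma^2)
    Priors: beta_k ~ N(0, tau^2 I),  pi ~ Dirichlet(alpha, ..., alpha).
    """
    rng = np.random.default_rng(rng)
    n, p = X.shape
    beta = rng.normal(size=(K, p))   # regression coefficients per component
    pi = np.full(K, 1.0 / K)         # mixing proportions
    beta_draws = np.empty((n_iter, K, p))
    for t in range(n_iter):
        # 1) Sample labels z_i | beta, pi: p(z_i = k) ∝ pi_k N(y_i; x_i'beta_k, sigma^2)
        resid = y[:, None] - X @ beta.T                 # (n, K) residuals
        logp = np.log(pi) - 0.5 * (resid / sigma) ** 2
        logp -= logp.max(axis=1, keepdims=True)         # stabilize before exp
        prob = np.exp(logp)
        prob /= prob.sum(axis=1, keepdims=True)
        u = rng.random(n)
        z = (prob.cumsum(axis=1) < u[:, None]).sum(axis=1)  # categorical draw
        # 2) Sample beta_k | z from its conjugate normal posterior
        for k in range(K):
            Xk, yk = X[z == k], y[z == k]
            V = np.linalg.inv(Xk.T @ Xk / sigma**2 + np.eye(p) / tau**2)
            m = V @ (Xk.T @ yk) / sigma**2
            beta[k] = rng.multivariate_normal(m, V)
        # 3) Sample pi | z from a Dirichlet update
        counts = np.bincount(z, minlength=K)
        pi = rng.dirichlet(alpha + counts)
        beta_draws[t] = beta
    return beta_draws
```

Because each step conditions on the current labels, the chain can become trapped when the labels rarely change — exactly the failure mode the paper's alternative algorithms are designed to address.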