Marginalized neural network mixtures for large-scale regression

  • Authors:
  • Miguel Lázaro-Gredilla; Aníbal R. Figueiras-Vidal

  • Affiliations:
  • Department of Signal Processing and Communications, Universidad Carlos III de Madrid, Madrid, Spain (both authors)

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 2010

Abstract

For regression tasks, traditional neural networks (NNs) have been superseded by Gaussian processes (GPs), which provide probabilistic predictions (input-dependent error bars), improved accuracy, and virtually no overfitting. However, the high computational cost of GPs means that, on massive data sets, one must resort to sparse GPs, which strive for similar performance at a much smaller computational cost. In this context, we introduce a mixture of NNs with marginalized output weights that both provides probabilistic predictions and improves on the performance of sparse GPs at the same computational cost. The effectiveness of this approach is shown experimentally on several representative large data sets.
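As context for the abstract: marginalizing the output weights of an NN under a Gaussian prior reduces, for a fixed hidden layer, to Bayesian linear regression over the hidden-layer features, which yields a closed-form Gaussian predictive distribution; a mixture then averages the predictions of several such networks. The minimal Python sketch below illustrates only this general idea. The random tanh hidden layers, the fixed hyperparameters, and the helper names are assumptions of this illustration, not the paper's actual model or training procedure.

```python
import numpy as np

def hidden_features(X, W, b):
    # Hypothetical fixed hidden layer with tanh units; the paper's
    # treatment of the hidden-layer weights may differ.
    return np.tanh(X @ W + b)

def marginalized_predict(Phi, y, Phi_star, alpha=1.0, sigma=0.1):
    """Integrate out the output weights w ~ N(0, alpha^-1 I) of a linear
    readout y = Phi w + noise, noise ~ N(0, sigma^2): standard Bayesian
    linear regression, giving Gaussian predictions with error bars."""
    M = Phi.shape[1]
    A = alpha * np.eye(M) + (Phi.T @ Phi) / sigma**2   # posterior precision
    w_mean = np.linalg.solve(A, Phi.T @ y) / sigma**2  # posterior mean of w
    mean = Phi_star @ w_mean                           # predictive mean
    var = sigma**2 + np.sum(Phi_star * np.linalg.solve(A, Phi_star.T).T,
                            axis=1)                    # input-dependent variance
    return mean, var

# Toy usage: a mixture of K marginalized NNs with random hidden layers.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
X_star = np.linspace(-3, 3, 7)[:, None]

means = []
for _ in range(5):                                     # K = 5 components
    W = rng.standard_normal((1, 50))                   # 50 hidden units
    b = rng.standard_normal(50)
    m, v = marginalized_predict(hidden_features(X, W, b), y,
                                hidden_features(X_star, W, b))
    means.append(m)
print(np.mean(means, axis=0))                          # mixture predictive mean
```

Note that the full mixture predictive distribution would be a mixture of Gaussians, so its variance combines within-component and between-component terms; the sketch averages only the means for brevity.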