Energy-based models for sparse overcomplete representations

  • Authors:
  • Yee Whye Teh; Max Welling; Simon Osindero; Geoffrey E. Hinton

  • Affiliations:
  • Department of Computer Science, University of Toronto, 10 King's College Road, Toronto M5S 3G4, Canada (Yee Whye Teh, Max Welling, Geoffrey E. Hinton); Gatsby Computational Neuroscience Unit, University College London, 17 Queen Square, London WC1N 3AR, United Kingdom (Simon Osindero)

  • Venue:
  • The Journal of Machine Learning Research
  • Year:
  • 2004

Abstract

We present a new way of extending independent components analysis (ICA) to overcomplete representations. In contrast to the causal generative extensions of ICA, which maintain marginal independence of the sources, we define features as deterministic (linear) functions of the inputs. This assumption results in marginal dependencies among the features, but conditional independence of the features given the inputs. By assigning energies to the features, a probability distribution over the input states is defined through the Boltzmann distribution. The free parameters of this model are trained using the contrastive divergence objective (Hinton, 2002). When the number of features equals the number of input dimensions, this energy-based model reduces to noiseless ICA, and we show experimentally that the proposed learning algorithm is able to perform blind source separation on speech data. In additional experiments we train overcomplete energy-based models to extract features from various standard data sets containing speech, natural images, hand-written digits and faces.
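To make the mechanics concrete, the sketch below illustrates the kind of model the abstract describes: features s = Wx are deterministic linear functions of the input, each feature contributes an energy term, the Boltzmann distribution p(x) ∝ exp(-E(x)) turns the total energy into a probability, and the filters are updated with a contrastive-divergence step that lowers the energy of the data and raises it on one-step model samples. This is a minimal illustration under stated assumptions, not the paper's exact recipe: the Student-t-style energy, the single Langevin step used for the negative phase, and all names (`energy`, `cd1_step`, etc.) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(X, W, alpha=1.0):
    """Total energy of inputs X (n x d): E(x) = sum_i alpha * log(1 + (w_i^T x)^2).
    The features S = X @ W.T are deterministic linear functions of the inputs;
    k may exceed d, giving an overcomplete representation."""
    S = X @ W.T                                  # features, shape (n, k)
    return (alpha * np.log1p(S**2)).sum(axis=1)

def energy_grad_x(X, W, alpha=1.0):
    """Gradient of E with respect to the inputs (used for the sampling step)."""
    S = X @ W.T
    return (alpha * 2.0 * S / (1.0 + S**2)) @ W  # (n, k) @ (k, d) -> (n, d)

def energy_grad_W(X, W, alpha=1.0):
    """Gradient of the mean energy with respect to the filters W."""
    S = X @ W.T                                  # (n, k)
    G = alpha * 2.0 * S / (1.0 + S**2)           # dE/dS, shape (n, k)
    return G.T @ X / X.shape[0]                  # (k, d)

def cd1_step(X, W, lr=0.01, step=0.1, alpha=1.0):
    """One contrastive-divergence update: decrease the energy of the data,
    increase the energy of crude one-step (Langevin-style) negative samples."""
    noise = rng.normal(scale=np.sqrt(2 * step), size=X.shape)
    X_neg = X - step * energy_grad_x(X, W, alpha) + noise
    dW = energy_grad_W(X, W, alpha) - energy_grad_W(X_neg, W, alpha)
    return W - lr * dW

# Toy usage: k = 8 filters on d = 4 inputs (a 2x overcomplete model).
d, k, n = 4, 8, 500
X = rng.laplace(size=(n, d))                     # heavy-tailed toy data
W = rng.normal(scale=0.1, size=(k, d))
for _ in range(100):
    W = cd1_step(X, W)
```

Two design notes, both hedged: a serious implementation would replace the single noisy gradient step with a proper MCMC sampler for the negative phase, and with k = d this update corresponds to the noiseless-ICA special case mentioned in the abstract, while the overcomplete case (k > d) is where the energy-based formulation departs from causal generative extensions of ICA.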