Learning Internal Representations by Error Propagation. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1.
Independent Component Analysis, a New Concept? Signal Processing, special issue on higher order statistics.
Inducing Features of Random Fields. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Atomic Decomposition by Basis Pursuit. SIAM Journal on Scientific Computing.
Estimating Overcomplete Independent Component Bases for Image Windows. Journal of Mathematical Imaging and Vision.
Training Products of Experts by Minimizing Contrastive Divergence. Neural Computation.
Discovering Multiple Constraints that are Frequently Approximately Satisfied. Proceedings of the 17th Conference on Uncertainty in Artificial Intelligence (UAI '01).
A Variational Method for Learning Sparse and Overcomplete Representations. Neural Computation.
Learning Overcomplete Representations. Neural Computation.
Minimax Entropy Principle and Its Application to Texture Modeling. Neural Computation.
Face Recognition by Independent Component Analysis. IEEE Transactions on Neural Networks.
A Fast Learning Algorithm for Deep Belief Nets. Neural Computation.
On the Spatial Statistics of Optical Flow. International Journal of Computer Vision.
Visual Recognition and Inference Using Dynamic Overcomplete Sparse Learning. Neural Computation.
Expectation Truncation and the Benefits of Preselection in Training Generative Models. Journal of Machine Learning Research.
We present a new way of extending independent component analysis (ICA) to overcomplete representations. In contrast to the causal generative extensions of ICA, which maintain marginal independence of sources, we define features as deterministic (linear) functions of the inputs. This assumption results in marginal dependencies among the features, but conditional independence of the features given the inputs. By assigning energies to the features, a probability distribution over the input states is defined through the Boltzmann distribution. Free parameters of this model are trained using the contrastive divergence objective (Hinton, 2002). When the number of features is equal to the number of input dimensions, this energy-based model reduces to noiseless ICA, and we show experimentally that the proposed learning algorithm is able to perform blind source separation on speech data. In additional experiments we train overcomplete energy-based models to extract features from various standard data sets containing speech, natural images, handwritten digits and faces.
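The training scheme described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration only: the Student-t style energy, the single-step Langevin negative phase, and all function names, step sizes, and learning rates below are assumptions chosen for clarity, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(W, X):
    # E(x) = sum_i g(w_i . x) with g(u) = log(1 + u^2), a heavy-tailed
    # (Student-t style) energy assigned to each linear feature (assumed form).
    S = X @ W.T                          # features: deterministic linear maps of inputs
    return np.log1p(S ** 2).sum(axis=1)

def grad_energy_x(W, X):
    # dE/dx, used to draw one-step "reconstructions" for contrastive divergence
    S = X @ W.T
    return (2.0 * S / (1.0 + S ** 2)) @ W

def grad_energy_W(W, X):
    # dE/dW averaged over a batch
    S = X @ W.T                          # (n, k)
    C = 2.0 * S / (1.0 + S ** 2)         # (n, k)
    return C.T @ X / X.shape[0]          # (k, d)

def cd1_step(W, X, lr=0.01, eps=0.05):
    # One CD-1 update: lower the energy of the data, raise the energy of
    # samples obtained by a single noisy (Langevin) step away from the data.
    noise = rng.normal(size=X.shape)
    X_neg = X - eps * grad_energy_x(W, X) + np.sqrt(2.0 * eps) * noise
    dW = grad_energy_W(W, X) - grad_energy_W(W, X_neg)
    return W - lr * dW
```

With as many rows of W as input dimensions this matches the noiseless-ICA limit mentioned in the abstract; taking more rows than dimensions gives the overcomplete case, which is exactly where the causal generative extensions become intractable and this energy-based formulation stays simple.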