Learning internal representations by error propagation. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1.
Independent component analysis, a new concept? Signal Processing, special issue on higher-order statistics.
Inducing Features of Random Fields. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Minimax entropy principle and its application to texture modeling. Neural Computation.
Atomic Decomposition by Basis Pursuit. SIAM Journal on Scientific Computing.
Estimating Overcomplete Independent Component Bases for Image Windows. Journal of Mathematical Imaging and Vision.
Training products of experts by minimizing contrastive divergence. Neural Computation.
Discovering Multiple Constraints that are Frequently Approximately Satisfied. Proceedings of the 17th Conference on Uncertainty in Artificial Intelligence (UAI '01).
A Variational Method for Learning Sparse and Overcomplete Representations. Neural Computation.
Learning Overcomplete Representations. Neural Computation.
Face recognition by independent component analysis. IEEE Transactions on Neural Networks.
Topographic Product Models Applied to Natural Scene Statistics. Neural Computation.
Combining discriminative features to infer complex trajectories. Proceedings of the 23rd International Conference on Machine Learning (ICML '06).
What kind of a graphical model is the brain? Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI '05).
Learning Deep Architectures for AI. Foundations and Trends in Machine Learning.
Learning Features by Contrasting Natural Images with Noise. Proceedings of the 19th International Conference on Artificial Neural Networks (ICANN '09), Part II.
Sparse deep belief net for handwritten digits classification. Proceedings of the 2010 International Conference on Artificial Intelligence and Computational Intelligence (AICI '10), Part I.
Partial extraction of edge filters by cumulant-based ICA under highly overcomplete model. Proceedings of the 17th International Conference on Neural Information Processing (ICONIP '10), Part II.
We present a new way of extending independent component analysis (ICA) to overcomplete representations. In contrast to causal generative extensions of ICA, which maintain marginal independence of the sources, we define features as deterministic (linear) functions of the inputs. This assumption results in marginal dependencies among the features, but conditional independence of the features given the inputs. By assigning energies to the features, a probability distribution over the input states is defined through the Boltzmann distribution. The free parameters of this model are trained using the contrastive divergence objective (Hinton, 2002). When the number of features equals the number of input dimensions, this energy-based model reduces to noiseless ICA, and we show experimentally that the proposed learning algorithm can perform blind source separation on speech data. In additional experiments we train overcomplete energy-based models to extract features from standard datasets containing speech, natural images, handwritten digits and faces.
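The ideas in the abstract can be sketched in a few lines of code. The following is a minimal illustration, not the authors' implementation: the specific Student-t-style feature energy E(x) = sum_i log(1 + (w_i . x)^2), the one-step Langevin negative phase, and all function names are assumptions made for the sake of the example. It shows the general shape of an energy-based model with deterministic linear features trained by contrastive divergence.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(X, W):
    """Energy of each row of X under the (assumed) Student-t-style model.

    Each feature is a deterministic linear function s_i = w_i . x of the
    input, and the energies define p(x) ∝ exp(-E(x)) via the Boltzmann
    distribution."""
    S = X @ W.T                      # feature responses, shape (n, k)
    return np.log1p(S ** 2).sum(axis=1)

def dE_dW(X, W):
    """Gradient of the mean energy with respect to the filters W."""
    S = X @ W.T
    C = 2.0 * S / (1.0 + S ** 2)     # derivative of log(1 + s^2) w.r.t. s
    return C.T @ X / X.shape[0]      # shape (k, d)

def dE_dX(X, W):
    """Gradient of the energy with respect to the inputs (for sampling)."""
    S = X @ W.T
    return (2.0 * S / (1.0 + S ** 2)) @ W

def cd1_update(X, W, lr=0.01, step=0.05):
    """One contrastive-divergence update: lower the energy of the data
    (positive phase) and raise it for one-step Langevin samples started
    at the data (negative phase)."""
    noise = rng.normal(scale=np.sqrt(step), size=X.shape)
    X_neg = X - 0.5 * step * dE_dX(X, W) + noise
    grad = dE_dW(X, W) - dE_dW(X_neg, W)
    return W - lr * grad
```

Because the features are deterministic functions of the input, nothing in the update constrains the number of filters k to match the input dimension d; taking k > d gives the overcomplete regime the abstract describes, while k = d with an invertible W corresponds to the noiseless-ICA special case.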