We present a new learning algorithm for Boltzmann machines that contain many layers of hidden variables. Data-dependent statistics are estimated using a variational approximation that tends to focus on a single mode, and data-independent statistics are estimated using persistent Markov chains. The use of two quite different techniques for estimating the two types of statistic that enter into the gradient of the log likelihood makes it practical to learn Boltzmann machines with multiple hidden layers and millions of parameters. The learning can be made more efficient by using a layer-by-layer pretraining phase that initializes the weights sensibly. The pretraining also allows the variational inference to be initialized sensibly with a single bottom-up pass. We present results on the MNIST and NORB data sets showing that deep Boltzmann machines learn very good generative models of handwritten digits and 3D objects. We also show that the features discovered by deep Boltzmann machines are a very effective way to initialize the hidden layers of feedforward neural nets, which are then discriminatively fine-tuned.
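To make the two-estimator gradient concrete, here is a minimal NumPy sketch of one parameter update for a toy two-hidden-layer Boltzmann machine: mean-field fixed-point iterations supply the data-dependent statistics, and a persistent alternating Gibbs chain supplies the data-independent statistics. This is our own illustrative reconstruction, not the authors' code: it assumes binary units, omits biases, minibatch bookkeeping, the layer-by-layer pretraining, and other details from the paper, and all names (`mean_field`, `gibbs_step`, `update`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    # Draw binary states with probabilities p.
    return (rng.random(p.shape) < p).astype(p.dtype)

# Toy two-hidden-layer DBM: visible v, hidden layers h1 and h2.
# Biases are omitted for brevity (an assumption of this sketch).
n_v, n_h1, n_h2 = 784, 500, 1000
W1 = 0.01 * rng.standard_normal((n_v, n_h1))   # v  -- h1 weights
W2 = 0.01 * rng.standard_normal((n_h1, n_h2))  # h1 -- h2 weights

def mean_field(v, n_steps=10):
    """Data-dependent statistics: fixed-point mean-field updates for
    the variational posterior q(h1, h2 | v), which tends to settle on
    a single mode."""
    mu1 = sigmoid(v @ W1)          # bottom-up initialization
    mu2 = sigmoid(mu1 @ W2)
    for _ in range(n_steps):
        mu1 = sigmoid(v @ W1 + mu2 @ W2.T)  # h1 sees both v and h2
        mu2 = sigmoid(mu1 @ W2)
    return mu1, mu2

def gibbs_step(v, h2):
    """Data-independent statistics: one step of alternating Gibbs
    sampling on the persistent ('fantasy') chain. Given the even
    layers (v, h2), the odd layer h1 is conditionally independent,
    and vice versa."""
    h1 = sample(sigmoid(v @ W1 + h2 @ W2.T))
    v_new = sample(sigmoid(h1 @ W1.T))
    h2_new = sample(sigmoid(h1 @ W2))
    return v_new, h1, h2_new

# Persistent fantasy particles, carried across parameter updates.
n_chains = 100
v_f = sample(np.full((n_chains, n_v), 0.5))
h2_f = sample(np.full((n_chains, n_h2), 0.5))

def update(v_batch, lr=0.001):
    """One gradient step: positive statistics from mean field,
    negative statistics from the persistent chain."""
    global W1, W2, v_f, h2_f
    mu1, mu2 = mean_field(v_batch)             # positive phase
    v_f, h1_f, h2_f = gibbs_step(v_f, h2_f)    # negative phase
    W1 += lr * (v_batch.T @ mu1 / len(v_batch) - v_f.T @ h1_f / n_chains)
    W2 += lr * (mu1.T @ mu2 / len(v_batch) - h1_f.T @ h2_f / n_chains)
```

In use, `update` would be called on successive minibatches of binarized data (e.g., MNIST digits); the persistent chains are deliberately never reset, so the negative statistics track the current model distribution. The paper's full procedure additionally initializes `W1` and `W2` with greedy layer-by-layer pretraining, which also supplies the single bottom-up pass used to initialize the mean-field inference.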