Much recent research has been devoted to learning algorithms for deep architectures such as Deep Belief Networks and stacks of auto-encoder variants, with impressive results obtained in several areas, mostly on vision and language data sets. The best results obtained on supervised learning tasks involve an unsupervised learning component, usually in an unsupervised pre-training phase. Even though these new algorithms have enabled training deep models, many questions remain as to the nature of this difficult learning problem. The main question investigated here is the following: how does unsupervised pre-training work? Answering this question is important if learning in deep architectures is to be further improved. We propose several explanatory hypotheses and test them through extensive simulations. We empirically show the influence of pre-training with respect to architecture depth, model capacity, and number of training examples. The experiments confirm and clarify the advantage of unsupervised pre-training. The results suggest that unsupervised pre-training guides the learning towards basins of attraction of minima that support better generalization from the training data set; the evidence from these results supports a regularization explanation for the effect of pre-training.
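To make the training procedure under study concrete, the following is a minimal sketch of greedy layer-wise unsupervised pre-training (here with denoising auto-encoders) followed by supervised fine-tuning. PyTorch is assumed; the layer sizes, noise level, optimizer settings, and helper names (pretrain_layer, build_pretrained_net, finetune) are illustrative choices, not the exact setup used in the paper's experiments.

```python
# Sketch: greedy layer-wise unsupervised pre-training + supervised fine-tuning.
# Assumes PyTorch; hyper-parameters and function names are illustrative only.
import torch
import torch.nn as nn


def pretrain_layer(encoder, decoder, data, epochs=5, noise=0.3, lr=1e-3):
    """Train one denoising auto-encoder layer to reconstruct its clean input."""
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        corrupted = data * (torch.rand_like(data) > noise).float()  # masking noise
        recon = decoder(torch.sigmoid(encoder(corrupted)))
        loss = loss_fn(recon, data)
        opt.zero_grad()
        loss.backward()
        opt.step()


def build_pretrained_net(data, layer_sizes, n_classes):
    """Greedy layer-wise pre-training: each layer models the codes produced by
    the layers below it, before any labels are used."""
    encoders = []
    codes = data
    for d_in, d_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        enc, dec = nn.Linear(d_in, d_out), nn.Linear(d_out, d_in)
        pretrain_layer(enc, dec, codes)
        with torch.no_grad():
            codes = torch.sigmoid(enc(codes))  # feed codes to the next layer
        encoders.append(enc)
    # Stack the pre-trained encoders and add a randomly initialized output layer.
    layers = []
    for enc in encoders:
        layers += [enc, nn.Sigmoid()]
    layers.append(nn.Linear(layer_sizes[-1], n_classes))
    return nn.Sequential(*layers)


def finetune(net, data, labels, epochs=10, lr=1e-3):
    """Supervised fine-tuning of the whole stack, starting from the pre-trained init."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        loss = loss_fn(net(data), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()


if __name__ == "__main__":
    X = torch.rand(256, 784)          # toy inputs (e.g., flattened images)
    y = torch.randint(0, 10, (256,))  # toy labels
    net = build_pretrained_net(X, [784, 256, 64], n_classes=10)
    finetune(net, X, y)
```

The point of the sketch is the order of operations: the unsupervised phase sets the starting weights, and supervised training then proceeds from that initialization, which is exactly the initialization effect whose role (optimization aid versus regularizer) the paper's experiments probe.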