Statistical models of natural images are an important tool for researchers in machine learning and computational neuroscience. The canonical measure for quantitatively assessing and comparing such models is the likelihood. One class of statistical models that has recently gained popularity and has been applied to a variety of complex data is the deep belief network. Evaluations of these models, however, have often been limited to qualitative inspection of samples, because their likelihood is computationally intractable. Motivated by these circumstances, the present article introduces a consistent estimator for the likelihood of deep belief networks that is computationally tractable and simple to apply in practice. Using this estimator, we quantitatively evaluate a deep belief network trained on natural image patches and compare it to other models of natural image patches. We find that, in terms of likelihood, the deep belief network is outperformed even by very simple mixture models.
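To make the idea of a consistent likelihood estimator concrete, the following is a minimal sketch on a toy one-layer sigmoid belief network. All parameters and sizes here are invented for illustration, and the estimator shown is plain Monte Carlo with the prior as proposal, p(v) = E_{h~p(h)}[p(v|h)], not the paper's estimator; it merely shares the key property that the estimate converges to the true likelihood as the number of samples grows, which can be checked against exact enumeration for a tiny network.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Toy sigmoid belief network (hypothetical parameters, for illustration only):
#   p(h_j = 1) = sigmoid(b_j),   p(v_i = 1 | h) = sigmoid(W h + c)
n_hid, n_vis = 5, 8
W = rng.normal(scale=0.5, size=(n_vis, n_hid))
b = rng.normal(scale=0.5, size=n_hid)
c = rng.normal(scale=0.5, size=n_vis)

def log_bernoulli(x, p):
    # log-probability of binary vector(s) x under factorial Bernoulli(p)
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p), axis=-1)

def exact_log_likelihood(v):
    # Enumerate all 2^n_hid hidden states (feasible only for tiny networks).
    logps = []
    for bits in range(2 ** n_hid):
        h = np.array([(bits >> j) & 1 for j in range(n_hid)], float)
        logps.append(log_bernoulli(h, sigmoid(b))
                     + log_bernoulli(v, sigmoid(W @ h + c)))
    return np.logaddexp.reduce(logps)

def mc_log_likelihood(v, n_samples=100_000):
    # Consistent Monte Carlo estimate: average p(v | h) over prior samples h.
    h = (rng.random((n_samples, n_hid)) < sigmoid(b)).astype(float)
    logw = log_bernoulli(v, sigmoid(h @ W.T + c))   # one weight per sample
    return np.logaddexp.reduce(logw) - np.log(n_samples)

v = (rng.random(n_vis) < 0.5).astype(float)
print(exact_log_likelihood(v), mc_log_likelihood(v))
```

For realistically sized models this prior-sampling scheme degrades quickly, since most hidden samples explain a given image patch poorly; practical estimators therefore use better proposal distributions, but the convergence argument is the same.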