Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, for this class of models the pattern of dependence among the filters is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing.
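For concreteness, the following is a minimal sketch of the generative (forward) direction of such a model: gaussian filter responses are multiplied by a mixer drawn from a small pool, with each filter assigned to one mixer probabilistically. The Rayleigh mixer prior, unit-variance gaussians, and hand-fixed assignment probabilities are illustrative assumptions for synthesizing data, not the paper's fitted values or inference procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

n_filters = 20      # filter responses per synthetic "patch" (illustrative)
n_mixers = 3        # size of the pool of candidate mixer variables (illustrative)
n_patches = 5000    # number of synthetic patches to generate

# Assumed assignment probabilities: each filter picks one mixer from the pool.
# In the model described above these assignments are latent and inferred; here
# they are fixed by hand purely to generate data.
assign_probs = rng.dirichlet(np.ones(n_mixers), size=n_filters)  # (n_filters, n_mixers)

samples = np.empty((n_patches, n_filters))
for t in range(n_patches):
    # One positive mixer value per pool entry (Rayleigh is a common GSM choice;
    # the exact mixer prior is an assumption here).
    mixers = rng.rayleigh(scale=1.0, size=n_mixers)

    # Probabilistic assignment of a mixer to each filter.
    assignments = np.array([rng.choice(n_mixers, p=assign_probs[i])
                            for i in range(n_filters)])

    # Gaussian components modeling local filter structure (unit variance assumed).
    gauss = rng.standard_normal(n_filters)

    # Observed filter responses: each gaussian multiplied by its assigned mixer.
    samples[t] = gauss * mixers[assignments]

# The marginals of `samples` are heavy-tailed, and filters sharing a mixer show
# the variance dependence ("bowtie" plots) characteristic of GSM filter responses.
print(samples.std(axis=0)[:5], np.abs(samples).mean())
```

Dividing each response by an estimate of its assigned mixer recovers the near-gaussian components, which is the sense in which the model connects to divisive normalization.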