A method is presented for learning the reciprocal feedforward and feedback connections required by the predictive coding model of cortical function. Using this method, feedforward and feedback connections are learned simultaneously and independently in a biologically plausible manner. The performance of the proposed algorithm is evaluated by applying it to learning the elementary components of artificial and natural images. For artificial images, the bars problem is employed, and the proposed algorithm is shown to produce state-of-the-art performance on this task. For natural images, components resembling Gabor functions are learned in the first processing stage, and neurons responsive to corners are learned in the second processing stage. The properties of these learned representations are in good agreement with neurophysiological data from V1 and V2. The proposed algorithm demonstrates for the first time that a single computational theory can explain both the formation of cortical receptive fields (RFs) and the response properties of cortical neurons once those RFs have been learned.
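To make the idea concrete, the following is a minimal sketch of a divisive-input-modulation style predictive coding network of the kind the abstract describes: responses are computed iteratively from divisive prediction errors, and the weights are then adjusted by a multiplicative, error-driven rule. The function names, constants, and network sizes here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative constants (assumed, not the paper's values)
eps1, eps2 = 1e-6, 1e-3
n_inputs, n_neurons = 16, 8

# Weights connecting prediction neurons to the input; in the full model the
# feedforward and feedback weights are learned as separate parameter sets.
W = rng.uniform(0.1, 1.0, (n_neurons, n_inputs))

def infer(x, W, steps=20):
    """Iteratively compute divisive prediction errors e and responses y."""
    y = np.zeros(W.shape[0])
    for _ in range(steps):
        r = W.T @ y                 # top-down reconstruction of the input
        e = x / (eps2 + r)          # divisive prediction error (1 = perfect fit)
        y = (eps1 + y) * (W @ e)    # multiplicative response update
    return y, e

def learn(x, W, beta=0.01):
    """One unsupervised weight update driven by the residual error."""
    y, e = infer(x, W)
    W = W * (1.0 + beta * np.outer(y, e - 1.0))
    return np.clip(W, 0.0, None)    # keep weights non-negative
```

Repeatedly calling `learn` on image patches (e.g. patterns from the bars problem) would, under this scheme, drive the weights toward the elementary components of the input, since the error `e` settles at 1 wherever the reconstruction matches the image.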