Currently, there is a lack of general-purpose in-place learning networks that model the feature layers of the cortex. By ''general-purpose'' we mean a general yet adaptive high-dimensional function approximator. In-place learning is a biological concept rooted in the genomic equivalence principle: each neuron is fully responsible for its own learning in its environment, with no need for an external learner. This paper presents the Multilayer In-place Learning Network (MILN) toward this ambitious goal. Computationally, in-place learning yields unusually efficient learning algorithms whose simplicity, low computational complexity, and generality set them apart from typical conventional learning algorithms. Following the neuroscience literature, we model layer 4 and layer 2/3 of the six-layer laminar cortex as the feature layers, with layer 4 using unsupervised learning and layer 2/3 using supervised learning. As a necessary requirement for autonomous mental development, MILN generates invariant neurons across its layers, with invariance increasing from earlier to later layers and reaching total invariance at the final motor layer. This self-generated invariant representation is enabled mainly by descending (top-down) connections, and it serves as an intermediate representation for learning later tasks in open-ended development.
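A minimal sketch of what ''in-place'' means computationally: each neuron updates its own weight vector using only its current input, its own response, and its firing age, with no external learner and no stored covariance matrix. The amnesic-mean Hebbian rule below is written in the style of CCIPCA/LCA-type incremental learning; the function name, the `amnesic` parameter value, and the toy setup are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def inplace_update(w, x, n, amnesic=2.0):
    """One in-place update of a single neuron's weight vector.

    Hypothetical CCIPCA/LCA-style sketch: the neuron uses only its own
    state (weights w, firing age n) and the current input x -- no
    external learner, no explicit covariance matrix.
    """
    # Amnesic mean: keep the old estimate but gradually down-weight
    # older samples so the neuron remains plastic.
    w1 = (n - 1 - amnesic) / n   # retention rate for the old weights
    w2 = (1 + amnesic) / n       # learning rate for the new evidence
    # Neuron response: projection of the input onto the unit feature.
    y = float(np.dot(x, w)) / (np.linalg.norm(w) + 1e-12)
    # Hebbian, covariance-free step: pull the weights toward the
    # response-weighted input.
    return w1 * w + w2 * y * x
```

Driving such a neuron with inputs drawn along one dominant direction makes its weight vector converge toward that direction (the first principal component), which illustrates how an unsupervised feature layer can develop features with each neuron learning entirely on its own.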