Computational models in cognitive neuroscience should ideally use biological properties and powerful computational principles to produce behavior consistent with psychological findings. Error-driven backpropagation is computationally powerful and has proven useful for modeling a range of psychological data but is not biologically plausible. Several approaches to implementing backpropagation in a biologically plausible fashion converge on the idea of using bidirectional activation propagation in interactive networks to convey error signals. This article demonstrates two main points about these error-driven interactive networks: (1) they generalize poorly due to attractor dynamics that interfere with the network's ability to produce novel combinatorial representations systematically in response to novel inputs, and (2) this generalization problem can be remedied by adding two widely used mechanistic principles, inhibitory competition and Hebbian learning, that can be independently motivated for a variety of biological, psychological, and computational reasons. Simulations using the Leabra algorithm, which combines the generalized recirculation (GeneRec), biologically plausible, error-driven learning algorithm with inhibitory competition and Hebbian learning, show that these mechanisms can result in good generalization in interactive networks. These results support the general conclusion that cognitive neuroscience models that incorporate the core mechanistic principles of interactivity, inhibitory competition, and error-driven and Hebbian learning satisfy a wider range of biological, psychological, and computational constraints than models employing a subset of these principles.
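The two mechanisms the abstract adds to error-driven learning can be illustrated in isolation: inhibitory competition (commonly implemented as k-winners-take-all, as in Leabra) and a normalized Hebbian weight update. The sketch below is a minimal, illustrative rendering under simplifying assumptions; the function names, the Oja-style normalization term, and all parameter values are this example's choices, not the paper's actual Leabra implementation.

```python
def kwta(activations, k):
    """k-winners-take-all inhibitory competition: keep the k most active
    units and silence the rest. A simplified stand-in for Leabra's kWTA
    inhibition, which instead computes a shared inhibitory threshold."""
    order = sorted(range(len(activations)),
                   key=lambda i: activations[i], reverse=True)
    winners = set(order[:k])
    return [a if i in winners else 0.0 for i, a in enumerate(activations)]

def hebbian_update(w, pre, post, lr=0.1):
    """Hebbian learning with Oja-style normalization to keep weights bounded:
    dw[i][j] = lr * post[j] * (pre[i] - post[j] * w[i][j]).
    `w` is an input-by-output weight matrix as nested lists."""
    return [[w[i][j] + lr * post[j] * (pre[i] - post[j] * w[i][j])
             for j in range(len(post))]
            for i in range(len(pre))]

# Competition sparsifies the representation (here, 2 of 4 units survive)...
acts = kwta([0.9, 0.1, 0.5, 0.3], k=2)
# ...and Hebbian learning then strengthens weights between co-active units.
w = hebbian_update([[0.0], [0.0]], pre=[1.0, 0.0], post=[1.0])
```

In a full Leabra-style model these updates would be combined with the error-driven (GeneRec) component each trial; here they only show why the two mechanisms encourage sparse, systematically reusable representations.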