Attractor networks are widely believed to underlie the memory systems of animals across different species. Existing models have succeeded in qualitatively capturing properties of attractor dynamics, but their computational abilities often suffer from poor representations of realistic complex patterns, spurious attractors, low storage capacity, and difficulty in identifying the attractive fields of attractors. We propose a simple two-layer architecture, the Gaussian attractor network, which has no spurious attractors when the stored patterns are uncorrelated and can store as many patterns as there are neurons in the output layer. Moreover, its attractive fields can be precisely quantified and manipulated. Equipped with experience-dependent unsupervised learning strategies, the network can exhibit both discrete and continuous attractor dynamics. A testable prediction based on numerical simulations is that there exist neurons in the brain that can initially discriminate two similar stimuli but lose this ability after extensive exposure to physically intermediate stimuli. Inspired by this network, we also found that adding local feedback connections to a well-known hierarchical visual recognition model, HMAX, enables the model to reproduce recent experimental results on high-level visual perception.
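To make the recall dynamics concrete, the following is a minimal Python sketch of a two-layer network in this spirit: a hidden layer of Gaussian units, one per stored pattern, feeding a readout that recombines the stored patterns. The update rule, the width parameter `sigma`, and all function names here are illustrative assumptions, not the paper's exact equations or architecture.

```python
# Minimal sketch of a two-layer Gaussian attractor network (an assumption-laden
# illustration, not the authors' exact model). Hidden layer: one Gaussian unit
# per stored pattern. Output layer: a normalized recombination of the patterns.
import numpy as np

def gaussian_attractor_step(x, patterns, sigma=0.5):
    """One recall step. `sigma` (assumed parameter) controls tuning width."""
    d2 = ((patterns - x) ** 2).sum(axis=1)          # squared distance to each stored pattern
    h = np.exp(-(d2 - d2.min()) / (2 * sigma**2))   # Gaussian activations (shifted for numerical stability)
    h /= h.sum()                                    # normalize: convex combination weights
    return h @ patterns                             # output layer recombines stored patterns

def recall(x0, patterns, sigma=0.5, n_steps=50):
    """Iterate the dynamics from cue x0 until a fixed point is reached."""
    x = x0.copy()
    for _ in range(n_steps):
        x_new = gaussian_attractor_step(x, patterns, sigma)
        if np.allclose(x_new, x):
            break
        x = x_new
    return x

rng = np.random.default_rng(0)
patterns = rng.standard_normal((10, 32))            # 10 roughly uncorrelated stored patterns
cue = patterns[3] + 0.3 * rng.standard_normal(32)   # noisy version of pattern 3
recovered = recall(cue, patterns)
print(np.argmin(((patterns - recovered) ** 2).sum(axis=1)))  # -> 3
```

In this sketch, shrinking `sigma` sharpens each hidden unit's tuning and thus narrows the set of cues that converge to its pattern, which is one concrete way an attractive field could be quantified and manipulated, loosely analogous to the property claimed in the abstract.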