Frequency spectrum modification: a new model for visual saliency detection
ISNN'10 Proceedings of the 7th international conference on Advances in Neural Networks - Volume Part II
Many computational models mimicking the visual cortex are based on spatial adaptations of unsupervised neural networks. In this paper, we present a new model, the neuronal cluster, which incorporates both spatial and temporal weights in a unified adaptation scheme. The "in-place" nature of the model rests on two biologically plausible learning rules: the Hebbian rule and lateral inhibition. We demonstrate mathematically that the temporal weights are derived from the delay in lateral inhibition. Trained on natural videos, the model develops spatio-temporal features such as orientation-selective cells, motion-sensitive cells, and spatio-temporal complex cells. The unified adaptation scheme allows us to construct a multilayered, task-independent attention-selection network that uses the same learning rule for edge, motion, and color detection, and this network can perform attention selection in both static and dynamic scenes.
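The abstract names two ingredients, Hebbian learning and lateral inhibition, but does not reproduce the model's update equations. The sketch below is only an illustration of how those two rules are commonly combined in unsupervised feature learning: lateral inhibition is approximated by winner-take-all competition, and the winner's weights are updated by a normalized Hebbian rule. The function name, learning rate, nonnegative initialization, and omission of the temporal (delayed-inhibition) weights are all assumptions of this sketch, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_wta_update(W, x, lr=0.1, k=1):
    """One in-place update of a bank of neurons.

    Hedged illustration: lateral inhibition is modeled as top-k
    winner-take-all, and the winners adapt via a Hebbian rule
    dW = lr * y * x, followed by weight normalization.
    """
    x = x / (np.linalg.norm(x) + 1e-9)          # normalize the input patch
    response = W @ x                             # pre-activations of all neurons
    winners = np.argsort(response)[-k:]          # lateral inhibition: only top-k fire
    y = np.zeros_like(response)
    y[winners] = np.maximum(response[winners], 0.0)
    W += lr * np.outer(y, x)                     # Hebbian strengthening of the winners
    W /= np.linalg.norm(W, axis=1, keepdims=True) + 1e-9  # keep weight norms bounded
    return W, y

# Toy run: 8 neurons adapting to 2-D inputs drawn from two noisy directions.
W = np.abs(rng.standard_normal((8, 2)))          # nonnegative init so responses start positive
W /= np.linalg.norm(W, axis=1, keepdims=True)
for _ in range(200):
    x = np.array([1.0, 0.0]) if rng.random() < 0.5 else np.array([0.0, 1.0])
    x += 0.05 * rng.standard_normal(2)           # small noise around each direction
    W, _ = hebbian_wta_update(W, x)
```

After training, the winning neurons' weight vectors align with the two input directions, a toy analogue of the selectivity (e.g. orientation-selective cells) that the model develops from natural video.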