An intrinsic limitation of linear Hebbian networks is that they can learn only from the linear pairwise correlations within an input stream. To explore what higher-order structure could be learned by a nonlinear Hebbian network, we constructed a model network containing a simple form of nonlinearity and applied it to the problem of learning to detect the disparities present in random-dot stereograms. The network consists of three layers, with nonlinear sigmoidal activation functions in the second-layer units. The nonlinearities allow the second layer to transform the pixel-based representation in the input layer into a new representation based on coupled pairs of left-right inputs. The third layer of the network then clusters patterns occurring on the second-layer outputs according to their disparity via a standard competitive learning rule. Analysis of the network dynamics shows that the second-layer units' nonlinearities interact with the Hebbian learning rule to expand the region over which pairs of left-right inputs are stable. The learning rule is neurobiologically inspired and plausible, and the model may shed light on how the nervous system learns to use coincidence detection in general.
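The two learning rules described above can be sketched in a few lines of code. The following is a minimal illustration, not the authors' actual model: the input dimensions, learning rates, initial weights, and toy "left-right pixel pair" pattern are all assumptions chosen for clarity. It shows a single second-layer unit whose sigmoid output drives a normalized Hebbian update, and a winner-take-all competitive rule of the kind used for the third layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    """Sigmoidal activation, as in the second-layer units."""
    return 1.0 / (1.0 + np.exp(-u))

def hebbian_step(w, x, lr=0.2):
    """One nonlinear Hebbian update with weight normalization.

    The postsynaptic activity y passes through a sigmoid, the weight
    change is the Hebbian product lr * y * x, and the weight vector is
    rescaled to unit length so the rule does not diverge.
    """
    y = sigmoid(w @ x)
    w = w + lr * y * x
    return w / np.linalg.norm(w)

def competitive_step(W, y, lr=0.2):
    """Standard winner-take-all competitive update: the third-layer unit
    whose weights best match the pattern y moves toward y."""
    k = int(np.argmax(W @ y))
    W[k] += lr * (y - W[k])
    return W, k

# Toy input: a coupled left-right pixel pair, e.g. the same dot seen at
# a fixed horizontal offset by the two eyes ([left pixels | right pixels]).
x = np.array([1.0, 0.0, 0.0, 1.0])

w = rng.normal(size=4)
w /= np.linalg.norm(w)
for _ in range(300):
    w = hebbian_step(w, x)

# After learning, the unit's weights align with the correlated pair.
alignment = float(w @ (x / np.linalg.norm(x)))

# Third layer: cluster two distinct second-layer patterns with two units.
patterns = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
W = np.array([[0.6, 0.4], [0.4, 0.6]])  # mild initial preferences
for _ in range(20):
    for p in patterns:
        W, _ = competitive_step(W, p)
final_winners = {int(np.argmax(W @ p)) for p in patterns}
```

Because the sigmoid output is always positive, repeated presentation of a correlated left-right pair keeps pushing the normalized weight vector toward that pair, while the competitive layer assigns distinct patterns to distinct units.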