The "Brain-state-in-a-box" neural model is a gradient descent algorithm
Journal of Mathematical Psychology
Analysis of Linsker's simulations of Hebbian rules
Neural Computation
Neural and automata networks: dynamical behavior and applications
Stability and optimization analyses of the generalized brain-state-in-a-box neural network model
Journal of Mathematical Psychology
The role of constraints in Hebbian learning
Neural Computation
Neural Computation
Fixed-point attractor analysis for a class of neurodynamics
Neural Computation
A Winner-Take-All Neural Network of N Linear Threshold Neurons without Self-Excitatory Connections
Neural Processing Letters
Learning in a higher-order simple perceptron
Mathematical and Computer Modelling: An International Journal
The limiter function is used in many learning and retrieval models as the constraint controlling the magnitude of the weight or state vectors. In this paper, we developed a new method to relate the set of saturated fixed points to the set of system parameters for models that use the limiter function, and then, as a case study, applied this method to Linsker's Hebbian learning network. We derived a necessary and sufficient condition to test whether a given saturated weight or state vector is stable for any given set of system parameters, and used this condition to determine the entire regime in the parameter space over which the given state is stable. This approach allows us to investigate the relative stability of the major receptive fields reported in Linsker's simulations, and to demonstrate the crucial role played by the synaptic density functions.
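To make the setting concrete, the following is a minimal sketch of the kind of test the abstract describes, not the paper's actual condition. It assumes a simple update rule x ← limiter(Wx) with hard bounds at ±1, and uses a hypothetical simplified stability check: a fully saturated vector is treated as stable when each unit's net input pushes its state strictly past the bound it sits at.

```python
import numpy as np

def limiter(v, lo=-1.0, hi=1.0):
    """Piecewise-linear limiter: clip each component to [lo, hi]."""
    return np.clip(v, lo, hi)

def is_stable_saturated_point(W, x, lo=-1.0, hi=1.0):
    """Test whether a fully saturated vector x (every component at lo or hi)
    is a stable fixed point of the update x <- limiter(W @ x).

    Hypothetical simplification of a necessary-and-sufficient test:
    each net input must lie strictly beyond the saturation bound its
    unit occupies, so the limiter keeps returning the same vector.
    """
    net = W @ x
    at_hi = np.isclose(x, hi)
    at_lo = np.isclose(x, lo)
    assert np.all(at_hi | at_lo), "x must be fully saturated"
    return bool(np.all(net[at_hi] > hi) and np.all(net[at_lo] < lo))

# Strong self-reinforcement: the saturated state [1, -1] persists.
W = np.array([[1.5, 0.2],
              [0.2, 1.5]])
x = np.array([1.0, -1.0])
print(is_stable_saturated_point(W, x))  # net = [1.3, -1.3], strictly past the bounds
```

Sweeping the entries of `W` (the "system parameters") while re-running this test for a fixed saturated `x` traces out the parameter regime over which that state remains stable, which is the kind of analysis the paper carries out for Linsker's network.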