Learning in neural networks is usually identified with alterations in the network's connections. Learning rules perform these alterations gradually, driven by the deviation between the desired and actual responses to a stimulus. Learning can also be accomplished by synthesis methods, which determine the connections directly, without incurring the cost of gradual training. We introduce a synthesis method for binary Hopfield neural networks in which learning is viewed as an optimization problem. A theory of concept representation is developed, and synthesis criteria, used to define the optimization problem's objective function and constraints, are presented. Experimental results, obtained by solving the optimization problem with simulated annealing, are provided.
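To make the idea concrete, here is a minimal sketch of synthesis-as-optimization, not the paper's actual formulation: the objective counts stability violations of a small set of hypothetical ±1 patterns, and simulated annealing searches over symmetric weight matrices with zero diagonal. The patterns, the cooling schedule, and the specific objective are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical +/-1 patterns to be stored as stable states (assumption).
patterns = np.array([[ 1, -1,  1, -1,  1],
                     [-1, -1,  1,  1, -1]])
n = patterns.shape[1]

def violations(W, P):
    """Count stability violations: bit i of a pattern p is stable when
    its local field h_i = sum_j W[i, j] * p[j] has the same sign as p[i]."""
    fields = P @ W.T                      # local fields, one row per pattern
    return int(np.sum(fields * P <= 0))  # misaligned (or zero) fields

# Simulated annealing over symmetric weights with zero diagonal.
W = np.zeros((n, n))
cost = violations(W, patterns)
T = 1.0
for step in range(20000):
    i, j = rng.integers(n), rng.integers(n)
    if i == j:
        continue  # keep the diagonal at zero
    W_new = W.copy()
    delta = rng.normal(scale=0.5)
    W_new[i, j] += delta
    W_new[j, i] += delta  # preserve symmetry
    new_cost = violations(W_new, patterns)
    # Accept improvements always; accept worse moves with Boltzmann probability.
    if new_cost < cost or rng.random() < np.exp((cost - new_cost) / T):
        W, cost = W_new, new_cost
    T *= 0.9997  # geometric cooling schedule (assumption)

print("remaining stability violations:", cost)
```

A zero violation count at the end means every stored pattern is a fixed point of the synthesized network; the paper's synthesis criteria would replace this toy objective with a principled one derived from its theory of concept representation.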