A Generalising Random Access Memory (G-RAM) neuron is distinguished from conventional neuron models by the fact that its tolerance to departures in similarity from its training patterns is variable. This paper details how this property affects the behaviour of a class of digital probabilistic neural networks that has been attracting attention in the neural networks literature for some years; such systems are also called n-tuple systems, weightless systems or p-RAM systems. After reviewing the literature on such networks, a novel, simple combinatoric analysis of the most likely behaviour of recursive G-RAM networks is described. The best network performance, measured by a key parameter called the 'radius of retrievability' (first defined by Wong and Sherrington [J. Phys. A 22 (1989) 2233] as the largest error in the input that still allows the dynamic network to evolve to the correct attractor state), is obtained with a training set composed of random data patterns. Increasing the size of the training set reduces this radius of retrievability in a predictable manner. Changing the nature of the training set to nonrandom patterns also reduces the radius of retrievability, to an extent that we show can be estimated from a measure of the diversity of the elements of the training set (we refer to this as the 'mean intra-set Hamming distance of the training set'). As mentioned earlier, the distinguishing feature of G-RAMs (indicated by the G) is a generalization parameter that determines how far a neuron's input vector can stray from a training input while the neuron still responds in the trained way. It is shown that when this generalization parameter is reduced, the radius of retrievability is also reduced, but it then becomes stable in the face of an increase in the size, or a change in the nature, of the training set.
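A minimal sketch of the two quantities discussed above, assuming the common Hamming-distance reading of the generalization parameter: a G-RAM neuron gives its trained response whenever the input lies within Hamming distance g of some training pattern. The class and function names (`GRAMNeuron`, `mean_intra_set_hamming`) are illustrative, not taken from the paper, and the sketch ignores the probabilistic output a real p-RAM would give outside the generalization radius.

```python
from itertools import combinations

def hamming(a, b):
    """Hamming distance between two equal-length binary tuples."""
    return sum(x != y for x, y in zip(a, b))

class GRAMNeuron:
    """Toy G-RAM neuron: responds in the trained way for any input
    within Hamming radius g of a training pattern (an assumption
    about the generalization mechanism, for illustration only)."""

    def __init__(self, g):
        self.g = g          # generalization parameter (Hamming radius)
        self.memory = {}    # training pattern -> trained response

    def train(self, pattern, response):
        self.memory[tuple(pattern)] = response

    def respond(self, pattern):
        pattern = tuple(pattern)
        # Find the nearest stored training pattern, if any.
        best = min(self.memory, key=lambda p: hamming(p, pattern),
                   default=None)
        if best is not None and hamming(best, pattern) <= self.g:
            return self.memory[best]
        return None  # outside the radius: undefined/'don't know' output

def mean_intra_set_hamming(patterns):
    """Mean pairwise Hamming distance over a training set -- the
    diversity measure used to estimate the loss in retrievability."""
    pairs = list(combinations(patterns, 2))
    return sum(hamming(a, b) for a, b in pairs) / len(pairs)
```

For example, a neuron trained on `(0, 0, 0, 0)` with `g = 1` still responds to the one-bit-corrupted input `(0, 0, 0, 1)`, but not to `(0, 1, 1, 0)`; shrinking `g` narrows the inputs that generalize, mirroring the trade-off the abstract describes between radius of retrievability and stability.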
This is a novel prediction of the behaviour of such systems, and of the robustness of that behaviour in the face of variations in the size and correlation properties of the training set.