A Quantitative Comparison of the Performance of Three Discrete Distributed Associative Memory Models
IEEE Transactions on Computers
The performance of two commonly used linear models of associative memory, the generalized inverse (GI) and the correlation matrix memory (CMM), is studied analytically in the presence of a new type of noise: training noise, arising from noisy training patterns. Theoretical expressions are derived for the signal-to-noise (S/N) ratio gain of the GI and CMM memories in both the auto-associative and hetero-associative modes of operation. The GI method is found to degrade significantly in the presence of training noise, while the CMM method is relatively unaffected by it. The theoretical expressions are plotted against results from Monte Carlo simulations, and the two are found to be in excellent agreement.
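The two memory constructions the abstract compares can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's experimental setup: the pattern dimensions, number of stored pairs, and bit-flip noise level are arbitrary choices, and the Monte Carlo "accuracy" here is just the fraction of correctly recalled bipolar components. The CMM is the outer-product (correlation) matrix Y Xᵀ, while the GI memory uses the Moore-Penrose pseudoinverse, Y X⁺; training noise is modeled by flipping components of the training inputs before building each matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, p = 64, 64, 8                        # input dim, output dim, stored pairs (arbitrary)
X = rng.choice([-1.0, 1.0], size=(n, p))   # bipolar input patterns, one per column
Y = rng.choice([-1.0, 1.0], size=(m, p))   # bipolar output patterns (hetero-associative)

def flip(P, prob):
    """Flip each bipolar component with probability `prob` (bit-flip noise model)."""
    return np.where(rng.random(P.shape) < prob, -P, P)

# Training noise: the memories are built from corrupted copies of the inputs.
Xt = flip(X, 0.1)

M_cmm = Y @ Xt.T                  # correlation matrix memory: sum of outer products
M_gi  = Y @ np.linalg.pinv(Xt)   # generalized-inverse memory: Y times pseudoinverse

def recall_accuracy(M, probe_noise=0.0):
    """Fraction of output bits recovered when recalling with (optionally noisy) probes."""
    Yhat = np.sign(M @ flip(X, probe_noise))
    return float(np.mean(Yhat == Y))

for name, M in [("CMM", M_cmm), ("GI", M_gi)]:
    print(f"{name} recall accuracy: {recall_accuracy(M):.3f}")
```

Repeating the loop over many random pattern sets and noise levels gives a Monte Carlo estimate of the kind the paper checks its theoretical S/N expressions against.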