We distinguish between many:1 (distortion-invariant) and 1:1 (large-class) pattern recognition associative processors: in the former, many different input keys are associated with the same output recollection vector; in the latter, each key is associated with a different recollection vector. A variety of associative processor synthesis algorithms are compared, showing that one can: store M vector pairs (where M > N, and N is the dimension of the keys) in fewer memory elements than standard digital storage requires; handle linearly dependent key vectors; and achieve robust performance under noise and quantization by design. We show that one must employ new recollection vector encoding techniques to improve storage density; otherwise the standard direct-storage nearest-neighbor processor is preferable. We find Ho-Kashyap associative processors and L-max recollection vector encoding to perform best, and we suggest new, more appropriate performance measures for associative processors.
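As a minimal sketch of the kind of associative processor discussed above, the following assumes a generalized-inverse (Moore-Penrose pseudoinverse) synthesis rule with one-hot recollection vectors and max-component decoding; the dimensions, noise level, and decoding rule here are illustrative choices, not the paper's exact experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: N-dimensional keys, M stored pairs.
# Here M < N so exact recall via the pseudoinverse is possible;
# the abstract's point is that suitable recollection encodings
# can push storage beyond M > N.
N, M = 16, 8

X = rng.standard_normal((N, M))   # key vectors, one per column
Y = np.eye(M)                     # one-hot recollection vectors

# Generalized-inverse synthesis: W = Y X^+ gives W X = Y exactly
# when the keys are linearly independent.
W = Y @ np.linalg.pinv(X)

# Recall a stored key corrupted by small additive noise, then decode
# by taking the largest output component (a max-style decoding).
noisy_key = X[:, 3] + 0.05 * rng.standard_normal(N)
recalled = int(np.argmax(W @ noisy_key))
print(recalled)
```

Because `W X = Y` holds exactly for linearly independent keys, the clean-recall output is the stored one-hot vector, and small input noise only perturbs the output components slightly, so the max-component decoding remains correct.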