Mixed Mode VLSI Implementation of a Neural Associative Memory
MICRONEURO '99 Proceedings of the 7th International Conference on Microelectronics for Neural, Fuzzy and Bio-Inspired Systems
A mixed-mode digital/analog special-purpose VLSI hardware implementation of an associative memory with a neural architecture is presented. The memory concept is based on a matrix architecture whose binary storage elements hold the connection weights. To enhance processing speed, analog circuit techniques are applied to implement the association algorithm. To keep the memory density as high as possible, two design strategies are pursued. First, the number of transistors per storage element is kept to a minimum: a circuit technique is proposed that uses a single 6-transistor cell for both weight storage and analog signal processing. Second, the device precision is chosen at a moderate level to save as much area as possible. Since device mismatch limits the performance of analog circuits, the impact of device precision on circuit performance is discussed explicitly; it is shown that the device precision limits the number of rows that can be activated in parallel. Because both the input vector and the output vector are assumed to be sparsely coded, it is concluded that the proposed circuit technique is appropriate even for large matrices, and that ultra-large-scale integration with a large number of connection weights is feasible.
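The functional principle behind such a chip can be sketched in software: a matrix of binary connection weights stores sparsely coded pattern pairs (Willshaw-style clipped Hebbian learning), and recall sums the weights along the active input lines and thresholds the result, which is the step the hardware performs in analog. This is a minimal illustrative sketch, not the paper's design; all names and parameters are assumptions.

```python
import numpy as np

def store(pairs, n_in, n_out):
    """Clip (OR) the outer products of binary pattern pairs into W."""
    W = np.zeros((n_out, n_in), dtype=np.uint8)
    for x, y in pairs:
        W |= np.outer(y, x).astype(np.uint8)  # binary weights: set, never count
    return W

def recall(W, x):
    """Sum weights on active input lines, then threshold.

    The matrix-vector sum is what the chip computes with analog
    current summation; the threshold equals the input activity.
    """
    s = W @ x                  # column sums over active input lines
    theta = int(x.sum())       # fixed threshold = number of active inputs
    return (s >= theta).astype(np.uint8)

# Example: two sparse pattern pairs in a small 4x6 weight matrix
x1 = np.array([1, 0, 1, 0, 0, 0]); y1 = np.array([1, 0, 0, 1])
x2 = np.array([0, 1, 0, 0, 1, 0]); y2 = np.array([0, 1, 1, 0])
W = store([(x1, y1), (x2, y2)], n_in=6, n_out=4)
assert (recall(W, x1) == y1).all()
assert (recall(W, x2) == y2).all()
```

Sparse coding matters here exactly as the abstract argues: with few active inputs, the analog sum per row stays small, so moderate device precision suffices even when the matrix is large.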