An investigation is made of implementations of competitive learning algorithms in analog VLSI circuits and systems. Analog and low-power digital circuits for competitive learning are currently important for their applications in computationally efficient speech and image compression by vector quantization, as required, for example, in portable multimedia terminals. A summary of competitive learning models is presented to indicate the type of VLSI computations required, and the effects of weight quantization are discussed. Analog circuit representations of computational primitives for learning and for the evaluation of distortion metrics are discussed. The present state of VLSI implementations of hard and soft competitive learning algorithms is described, as well as those for topological feature maps. The tolerance of learning algorithms to observed analog circuit properties is reported. New results are also presented from simulations of frequency-sensitive and soft competitive learning concerning the sensitivity of these algorithms to precision in VLSI learning computations. Applications of these learning algorithms to unsupervised feature extraction and to vector quantization of speech and images are also described.
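The hard ("winner-take-all") competitive learning rule underlying the vector quantizers discussed above can be sketched in software: each input vector moves only its nearest codebook weight toward itself, and quantization then maps each input to the index of its nearest weight. This is a minimal illustrative sketch (function names and parameters are my own, not from the paper), not a model of the analog circuits themselves.

```python
import numpy as np

def competitive_learning(data, n_units=4, lr=0.05, epochs=20, seed=0):
    """Hard (winner-take-all) competitive learning on rows of `data`."""
    rng = np.random.default_rng(seed)
    # Initialize codebook weights from randomly chosen training samples.
    w = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Winner = unit with smallest squared-Euclidean distortion.
            i = np.argmin(((w - x) ** 2).sum(axis=1))
            # Move only the winner toward the input.
            w[i] += lr * (x - w[i])
    return w

def quantize(data, w):
    """Vector quantization: nearest-codebook index for each input row."""
    d2 = ((data[:, None, :] - w[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(d2, axis=1)
```

Soft competitive variants replace the single winner with updates weighted by each unit's (e.g. softmax-normalized) proximity to the input, and frequency-sensitive variants scale each unit's distortion by its win count to avoid dead units.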