Analog VLSI Circuits for Competitive Learning Networks
Analog Integrated Circuits and Signal Processing - Special Issue on Cellular Neural Networks and Analog VLSI
Biologically-Inspired On-Chip Learning in Pulsed Neural Networks
Analog Integrated Circuits and Signal Processing - Special Issue on Learning on Silicon
Mixed-Mode Programmable and Scalable Architecture for On-Chip Learning
Analog Integrated Circuits and Signal Processing - Special Issue on Learning on Silicon
An On-Chip BP Learning Neural Network with Ideal Neuron Characteristics and Learning Rate Adaptation
Analog Integrated Circuits and Signal Processing
Analog VLSI Implementation of Artificial Neural Networks with Supervised On-Chip Learning
Analog Integrated Circuits and Signal Processing
An Experimental Analog CMOS Self-Learning Chip
MICRONEURO '99 Proceedings of the 7th International Conference on Microelectronics for Neural, Fuzzy and Bio-Inspired Systems
Effects of Analog-VLSI hardware on the performance of the LMS algorithm
ICANN'06 Proceedings of the 16th International Conference on Artificial Neural Networks - Volume Part I
In this paper we present results of simulations in which both the forward and backward passes are computed on-chip with analog components. The aspects of analog hardware studied are component variability, limited voltage ranges, components (multipliers) that only approximate the computations required by the backpropagation algorithm, and capacitive weight decay. We show that backpropagation networks can learn to compensate for all of these shortcomings except zero offsets, which are correctable with minor additions to the circuits. Variability in multiplier gains is not a problem, and learning remains possible despite limited voltage ranges and approximate multiplications. Fixed component variation arising from fabrication proves less detrimental to learning than component variation due to noise. Weight decay is tolerable provided it is sufficiently small, which in turn requires either frequent weight refreshing by rehearsal on the training data or modest cooling of the circuits; the former approach also allows the network to learn nonstationary problem sets.
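As a rough illustration of how such non-idealities can be injected into a software simulation of backpropagation, the following sketch (not the authors' simulator) trains a small network on XOR while applying a fixed multiplier-gain spread, voltage clipping, and per-update capacitive weight decay. All numerical values here (gain spread, clip range, decay rate, learning rate) are illustrative assumptions, not parameters taken from the paper.

```python
# Minimal sketch: backpropagation with simulated analog non-idealities.
# Assumed values (gain spread, V_LIMIT, DECAY, LR) are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Toy XOR task.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# Weights, plus a fixed per-multiplier gain perturbation standing in for
# fabrication-time component variability.
W1 = rng.normal(0, 0.5, (2, 4))
W2 = rng.normal(0, 0.5, (4, 1))
gain1 = 1.0 + rng.normal(0, 0.05, W1.shape)  # fixed fabrication spread
gain2 = 1.0 + rng.normal(0, 0.05, W2.shape)

V_LIMIT = 2.0   # limited voltage range: clip net inputs to +/- V_LIMIT
DECAY = 1e-4    # capacitive weight decay per update
LR = 0.5        # learning rate

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(5000):
    # Forward pass with gain-perturbed multiplies and voltage clipping.
    h = sigmoid(np.clip(X @ (gain1 * W1), -V_LIMIT, V_LIMIT))
    y = sigmoid(np.clip(h @ (gain2 * W2), -V_LIMIT, V_LIMIT))

    # Backward pass (standard backprop; approximate analog multipliers
    # could be modeled here as well).
    e2 = (y - T) * y * (1 - y)
    e1 = (e2 @ (gain2 * W2).T) * h * (1 - h)

    # Weight update, plus capacitive decay of every weight toward zero.
    W2 = (1 - DECAY) * W2 - LR * (h.T @ e2)
    W1 = (1 - DECAY) * W1 - LR * (X.T @ e1)

print("final outputs:", y.ravel())
```

Because the gain factors are sampled once and then held fixed, they model fabrication spread; resampling them on every pass would instead model noise-driven variation, the case the abstract reports as more harmful to learning. Likewise, the training loop itself acts as the "refreshing by rehearsal" that counteracts the per-update weight decay.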