Analog VLSI Implementation of Artificial Neural Networks with Supervised On-Chip Learning
Analog Integrated Circuits and Signal Processing
Typical analog VLSI architectures for on-chip learning are limited in functionality and scale poorly with problem size. We present a scalable hybrid analog-digital architecture for backpropagation learning in multilayer feedforward neural networks, which integrates the flexible functionality and programmability of digital control functions with the efficiency of analog parallel neural computation. The architecture is fully scalable, both in the parallel analog functions of forward and backward signal propagation through synaptic and neural functional units (SynMod and NeuMod), and in the global and local digital functions controlling recall, learning, initialization, monitoring, and built-in test. The architecture includes local provisions for long-term weight storage using refresh, which is transparent to the functional operation during both recall and learning. “Refresh While Learning” (RWL) provides a means to compensate for the finite precision of the quantized analog weights during learning. We include simulation results for a network of 32×32 neurons, mapped in parallel onto a MasPar computational engine, which validate the functionality of the architecture on simple character recognition tasks and demonstrate robust operation of the trained network under 4-bit quantization of the weights owing to the RWL technique.
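The core idea behind RWL — training proceeds on weights that are periodically snapped back to a coarse quantized grid, so backpropagation itself compensates for the limited weight precision — can be illustrated in software. The sketch below is a minimal behavioral model, not the paper's circuit: the `quantize` helper, the 4-bit signed grid, the tiny 2-2-1 network, and the OR task are all illustrative assumptions.

```python
import numpy as np

def quantize(w, bits=4, w_max=1.0):
    """Snap weights to the nearest level of a signed uniform grid,
    modeling refresh of analog weights to quantized storage levels."""
    levels = 2 ** (bits - 1) - 1          # 7 positive levels for 4 bits
    step = w_max / levels
    return np.clip(np.round(w / step) * step, -w_max, w_max)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Illustrative 2-2-1 network on a linearly separable task (logical OR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [1]], dtype=float)

W1 = quantize(rng.uniform(-0.5, 0.5, (2, 2)))
W2 = quantize(rng.uniform(-0.5, 0.5, (2, 1)))
lr = 0.5

for epoch in range(2000):
    h = sigmoid(X @ W1)                   # forward pass through hidden layer
    out = sigmoid(h @ W2)
    err = y - out
    d2 = err * out * (1 - out)            # output-layer delta
    d1 = (d2 @ W2.T) * h * (1 - h)        # backpropagated hidden-layer delta
    # Weight update followed by "refresh": weights snap back to the 4-bit
    # grid, so learning operates on quantized values throughout training.
    W2 = quantize(W2 + lr * h.T @ d2)
    W1 = quantize(W1 + lr * X.T @ d1)
```

After training, every entry of `W1` and `W2` lies exactly on the 4-bit grid, mirroring how the stored analog weights remain at refreshed quantization levels while learning steers the network around the coarse grid.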