Mixed-Mode Programmable and Scalable Architecture for On-Chip Learning

  • Authors:
  • Bassem A. Alhalabi; Magdy A. Bayoumi; Bassem Maaz

  • Affiliations:
  • Department of Computer Science and Engineering, Florida Atlantic University, Boca Raton, Florida 33431 (bassem@cse.fau.edu)
  • The Center for Advanced Computer Studies, The University of Southwestern Louisiana, Lafayette, Louisiana 70504
  • The Center for Advanced Computer Studies, The University of Southwestern Louisiana, Lafayette, Louisiana 70504

  • Venue:
  • Analog Integrated Circuits and Signal Processing - Special issue on Learning on Silicon
  • Year:
  • 1999

Abstract

Typical analog VLSI architectures for on-chip learning are limited in functionality and scale poorly with problem size. We present a scalable hybrid analog-digital architecture for backpropagation learning in multilayer feedforward neural networks, which integrates the flexibility and programmability of digital control with the efficiency of parallel analog neural computation. The architecture is fully scalable, both in the parallel analog functions of forward and backward signal propagation through synaptic and neural functional units (SynMod and NeuMod), and in the global and local digital functions controlling recall, learning, initialization, monitoring, and built-in test. The architecture includes local provisions for long-term weight storage using refresh, which is transparent to functional operation during both recall and learning. “Refresh While Learning” (RWL) provides a means to compensate for the finite precision of the quantized analog weights during learning. We include simulation results for a network of 32×32 neurons, mapped in parallel onto a MasPar computational engine, which validate the functionality of the architecture on simple character recognition tasks and demonstrate robust operation of the trained network under 4-bit weight quantization owing to the RWL technique.
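
The key idea behind RWL, as the abstract describes it, is that quantized weight refresh happens inside the learning loop, so backpropagation itself absorbs the quantization error rather than encountering it only after training. The following is a minimal software sketch of that idea, assuming one plausible reading: the forward and backward passes compute with 4-bit refreshed weights while updates accumulate in a full-precision shadow value standing in for the analog weight between refresh cycles. The network size, weight range, XOR task, learning rate, and quantizer are illustrative assumptions, not details taken from the paper or its circuits.

    import numpy as np

    rng = np.random.default_rng(0)

    N_BITS, W_MAX = 4, 8.0  # 4-bit levels on an assumed [-8, 8] weight range

    def quantize(w):
        # Snap each weight to the nearest of 2**N_BITS uniform refresh levels.
        step = 2 * W_MAX / (2 ** N_BITS - 1)
        return np.clip(np.round(w / step) * step, -W_MAX, W_MAX)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def with_bias(a):
        # Append a constant-1 column so biases live inside the weight matrices.
        return np.hstack([a, np.ones((len(a), 1))])

    # Toy task: XOR with a 2-3-1 feedforward network (sizes are illustrative).
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    T = np.array([[0.], [1.], [1.], [0.]])

    # Full-precision accumulators play the role of the analog weight
    # between refresh cycles; the network computes with quantized copies.
    W1 = rng.uniform(-1, 1, (3, 3))
    W2 = rng.uniform(-1, 1, (4, 1))
    lr = 0.5

    for epoch in range(20000):
        # "Refresh": forward and backward passes use the quantized (stored)
        # weights, so the backpropagated error reflects the quantization.
        Q1, Q2 = quantize(W1), quantize(W2)
        h = sigmoid(with_bias(X) @ Q1)
        y = sigmoid(with_bias(h) @ Q2)

        # Standard backprop deltas, computed against the quantized weights.
        dy = (y - T) * y * (1 - y)
        dh = (dy @ Q2[:-1].T) * h * (1 - h)

        # Updates accumulate in full precision, compensating for the
        # quantization error at the next refresh.
        W2 -= lr * with_bias(h).T @ dy
        W1 -= lr * with_bias(X).T @ dh

    # Evaluate with purely 4-bit weights after the final refresh.
    Q1, Q2 = quantize(W1), quantize(W2)
    h = sigmoid(with_bias(X) @ Q1)
    print(sigmoid(with_bias(h) @ Q2).round(2))  # should approach [0, 1, 1, 0]

Keeping the full-precision accumulator separate from the quantized stored weight is what lets learning proceed even when a single update is smaller than one refresh level; this mirrors, in software, the abstract's claim that refresh remains transparent to recall and learning while the learning process compensates for the finite weight precision.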