Hardware architecture for a general regression neural network coprocessor

  • Authors:
  • Jesús Lázaro; Jagoba Arias; Armando Astarloa; Unai Bidarte; Aitzol Zuloaga

  • Affiliations:
  • Department of Electronics and Telecommunications, University of the Basque Country, Alameda Urquijo s/n, 48013 Bilbao, Spain (all authors)

  • Venue:
  • Neurocomputing
  • Year:
  • 2007

Abstract

This article presents a series of hardware implementations of a general regression neural network (GRNN) on FPGAs. The paper studies this neural network under different fixed-point and floating-point implementations, covering training as well as testing of the network, and focuses on precision loss and on the area and speed results of the resulting neural network coprocessor, which can be used in a System on Programmable Chip. A GRNN approximates functions and has been used in control, prediction, fault diagnosis, and engine management, among other applications. GRNNs are mostly implemented in software because they require a large number of complex mathematical operations. With the increasing power and capabilities of current FPGAs, it is now possible not only to translate them into hardware but also, thanks to the reconfigurable nature of these devices, to explore different hardware/software partitions. These hardware implementations increase both the speed and the performance of these neural networks, and the designer can select the area-speed trade-off that best fits the application.
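
The abstract does not give the exact formulation used in the coprocessor; as a point of reference, the standard GRNN estimator (a kernel-weighted average of the stored training targets) is sketched below in plain floating-point C. The function name, argument layout, and the use of a Gaussian kernel with a single spread parameter sigma are assumptions for illustration, not the paper's implementation.

```c
#include <math.h>
#include <stddef.h>

/* Minimal sketch of the standard GRNN estimate:
 *   y_hat(x) = sum_i( y_i * exp(-d_i^2 / (2*sigma^2)) )
 *            / sum_i(       exp(-d_i^2 / (2*sigma^2)) )
 * where d_i^2 is the squared Euclidean distance between the query x and
 * stored training pattern x_i. "Training" a GRNN amounts to storing the
 * patterns, which is why it maps naturally to a memory-plus-MAC coprocessor. */
double grnn_eval(const double *patterns,  /* n x dim training inputs, row-major */
                 const double *targets,   /* n training outputs                 */
                 size_t n, size_t dim,
                 const double *x,         /* query vector of length dim         */
                 double sigma)            /* smoothing (spread) parameter       */
{
    double num = 0.0, den = 0.0;
    for (size_t i = 0; i < n; ++i) {
        double d2 = 0.0;
        for (size_t j = 0; j < dim; ++j) {
            double diff = x[j] - patterns[i * dim + j];
            d2 += diff * diff;            /* squared distance to pattern i      */
        }
        double w = exp(-d2 / (2.0 * sigma * sigma));  /* Gaussian kernel weight */
        num += targets[i] * w;
        den += w;
    }
    return den > 0.0 ? num / den : 0.0;   /* weighted average of stored targets */
}
```

A fixed-point hardware version would replace the double arithmetic with scaled integers and the exponential with a lookup table or piecewise approximation, which is where the precision-loss versus area/speed trade-off discussed in the paper arises.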