This article presents a series of hardware implementations of a general regression neural network (GRNN) on FPGAs. The paper studies the network under different fixed-point and floating-point implementations, covering training as well as testing, and focuses on the precision loss, area, and speed of the resulting neural-network coprocessor, which can be used in a System on Programmable Chip. A GRNN can approximate functions and has been applied to control, prediction, fault diagnosis, and engine management, among other tasks. GRNNs are usually implemented in software because they require a large number of complex mathematical operations. With the increasing power and capacity of current FPGAs, it is now possible not only to translate them into hardware but also, thanks to the reconfigurability of these devices, to explore different hardware/software partitions. These hardware implementations increase both the speed and the performance of the networks, and the designer can select the area-speed trade-off that best fits the application.
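To illustrate the kind of computation being mapped to hardware, the following is a minimal software sketch of GRNN inference (Specht's formulation, equivalent to Nadaraya-Watson kernel regression): the prediction is a Gaussian-weighted average of the training targets. The function names, the toy data, and the smoothing parameter `sigma` are illustrative assumptions, not taken from the paper.

```python
import math

def grnn_predict(x, train_x, train_y, sigma=0.3):
    """GRNN inference: Gaussian-kernel-weighted average of training targets.

    x       -- query input, a list of features
    train_x -- list of training inputs (lists of features)
    train_y -- list of scalar training targets
    sigma   -- smoothing parameter (illustrative value)
    """
    # One Gaussian weight per stored training pattern
    weights = [
        math.exp(-sum((a - b) ** 2 for a, b in zip(x, xi)) / (2 * sigma ** 2))
        for xi in train_x
    ]
    # Normalized weighted sum of the targets
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

# Toy 1-D example: approximate y = x^2 from four stored samples
train_x = [[0.0], [1.0], [2.0], [3.0]]
train_y = [0.0, 1.0, 4.0, 9.0]
print(grnn_predict([1.5], train_x, train_y))  # close to 2.5
```

The exponentials, divisions, and per-pattern accumulations visible here are the "complex mathematical operations" the abstract refers to; in a fixed-point FPGA implementation each of them becomes a source of the precision loss and an area/speed cost the paper quantifies.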