An FPGA-based face detector using neural network and a scalable floating point unit

  • Authors:
  • Yongsoon Lee; Seok-Bum Ko

  • Affiliations:
  • Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Canada (both authors)

  • Venue:
  • CSECS'06 Proceedings of the 5th WSEAS International Conference on Circuits, Systems, Electronics, Control & Signal Processing
  • Year:
  • 2006

Abstract

This study implements an FPGA-based face detector using neural networks and a scalable floating-point arithmetic unit (FPU). The FPU provides dynamic range while requiring fewer bits than a fixed-point implementation, which reduces memory usage and makes it efficient for neural network systems with wide data words. Because the arithmetic unit occupies 39 to 45% of the total neural network system area, reducing the bit width shrinks not only the memory but also the FPU and the overall system. Reducing the FPU from 32 bits (IEEE 754 single precision) to 16 bits cut the size of the memory and arithmetic units by 50%, with only a 1.25% deterioration in the detection rate. To determine the smallest acceptable FPU bit width, we examined how representation errors affect the detection rate using the maximum relative representation error (MRRE). The scalable FPU and the error analysis can help determine the details, especially the area and speed of the FPU, for an embedded neural network system.
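
As a rough illustration of the kind of representation-error analysis the abstract refers to, the Python sketch below quantizes a value to a reduced-precision floating-point format and compares the observed relative error against the round-to-nearest bound 2^-(fraction bits + 1). The 5-exponent/10-fraction split, the function names `quantize` and `mrre_bound`, and the overflow handling are assumptions made for this sketch, not the paper's actual FPU design or its MRRE definition.

```python
import math
import random

def quantize(x, exp_bits=5, frac_bits=10):
    """Round x to the nearest value representable in a hypothetical
    1-sign / exp_bits-exponent / frac_bits-fraction format.
    Normalized values only; overflow/underflow raise for simplicity."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                      # x = m * 2**e, 0.5 <= |m| < 1
    # Round the significand to frac_bits + 1 significant bits (hidden bit + fraction).
    scaled = round(m * 2 ** (frac_bits + 1))
    q = scaled * 2.0 ** (e - frac_bits - 1)
    bias = 2 ** (exp_bits - 1) - 1
    e_min, e_max = 1 - bias, bias             # IEEE-style normalized exponent range
    _, eq = math.frexp(q)
    if not (e_min + 1 <= eq <= e_max + 1):    # frexp's exponent is offset by 1
        raise OverflowError("value outside the normalized range of the format")
    return q

def mrre_bound(frac_bits):
    """Max relative representation error for round-to-nearest:
    half a unit in the last place, i.e. 2**-(frac_bits + 1)."""
    return 2.0 ** -(frac_bits + 1)

# Usage: check that observed relative errors stay within the theoretical bound.
random.seed(0)
worst = 0.0
for _ in range(10000):
    x = random.uniform(0.1, 100.0) * random.choice([-1.0, 1.0])
    q = quantize(x, exp_bits=5, frac_bits=10)
    worst = max(worst, abs(q - x) / abs(x))
print(worst, "<=", mrre_bound(10))
```

In this sketch, halving the fraction width roughly doubles the MRRE bound, which is the kind of trade-off between bit width and representation error that the paper weighs against the observed change in detection rate.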