Performance analysis of bit-width reduced floating-point arithmetic units in FPGAs: a case study of neural network-based face detector

  • Authors:
  • Yongsoon Lee, Younhee Choi, Seok-Bum Ko, Moon Ho Lee

  • Affiliations:
  • Electrical and Computer Engineering Department, University of Saskatchewan, Saskatoon, SK, Canada (Y. Lee, Y. Choi, S.-B. Ko); Institute of Information and Communication, Chonbuk National University, Jeonju, South Korea (M. H. Lee)

  • Venue:
  • EURASIP Journal on Embedded Systems (special issue: FPGA Supercomputing Platforms, Architectures, and Techniques for Accelerating Computationally Complex Algorithms)
  • Year:
  • 2009

Abstract

This paper implements a field-programmable gate array (FPGA) based face detector using a neural network (NN) and bit-width reduced floating-point arithmetic units (FPUs). An analytical error model, based on the maximum relative representation error (MRRE) and the average relative representation error (ARRE), is developed to obtain the maximum and average output errors of the bit-width reduced FPUs. The bit-width reduced FPUs and the NN are then designed using MATLAB and VHDL, and the analytical (MATLAB) results are compared with the experimental (VHDL) results; the two agree closely in shape. We demonstrate that incremental reductions in the number of bits used yield significant savings in area and power together with higher speed.
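As a rough illustration of how such an error analysis can be set up, the sketch below evaluates MRRE and ARRE over a range of mantissa bit-widths using the textbook definitions (base beta, mantissa normalized to [1/beta, 1), round-to-nearest) and scales them by an illustrative fan-in to estimate a single neuron's pre-activation error. The values of `beta`, `p`, and `n_in` are assumptions chosen for the example, not figures taken from the paper, and the paper's full error model may propagate errors through the NN differently.

```matlab
% Minimal sketch: representation-error bounds for bit-width reduced FPUs.
% Assumes the textbook MRRE/ARRE definitions (base beta, mantissa
% normalized to [1/beta, 1), round-to-nearest); beta, p, and n_in are
% illustrative values, not taken from the paper.

beta = 2;          % binary floating point
p    = 8:2:22;     % candidate mantissa (fraction) bit-widths to evaluate

MRRE = 0.5 * beta .^ (1 - p);                          % worst-case relative error
ARRE = ((beta - 1) / (4 * log(beta))) * beta .^ (-p);  % average relative error

% Very rough first-order estimate of one neuron's pre-activation error:
% with n_in weighted inputs, representation errors accumulate roughly
% linearly before the bounded-slope sigmoid activation.
n_in = 400;                      % illustrative fan-in of a hidden neuron
max_out_err = n_in * MRRE;       % pessimistic (MRRE-based) bound
avg_out_err = n_in * ARRE;       % expected-case (ARRE-based) estimate

fprintf('%6s %14s %14s\n', 'bits', 'max rel err', 'avg rel err');
fprintf('%6d %14.3e %14.3e\n', [p; max_out_err; avg_out_err]);
```

Sweeping `p` this way exposes the qualitative trade-off the paper quantifies: each mantissa bit removed roughly doubles the representation-error bound, which can then be weighed against the area, speed, and power gains of the smaller FPU.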