A hardware architecture for real-time object detection using depth and edge information

  • Authors:
  • Christos Kyrkou; Christos Ttofis; Theocharis Theocharides

  • Affiliation:
  • University of Cyprus, Nicosia, Cyprus (all authors)

  • Venue:
  • ACM Transactions on Embedded Computing Systems (TECS)
  • Year:
  • 2013

Abstract

Emerging embedded 3D vision systems for robotics and security applications utilize object detection to perform video analysis in order to intelligently interact with their host environment and take appropriate actions. Such systems have high performance and high detection-accuracy demands, while requiring low energy consumption, especially in embedded mobile systems. However, object detection involves a large image search space, primarily because of the different sizes in which an object may appear, which makes these demands difficult to meet. Consequently, meeting such constraints requires reducing the search space involved in object detection. To this end, this article proposes a depth and edge accelerated search method and a dedicated hardware architecture that implements it, providing an efficient platform for generic real-time object detection. The hardware integration of depth and edge processing mechanisms with a support vector machine classification core on an FPGA platform results in significant speed-ups and improved detection accuracy. The proposed architecture was evaluated using images of various sizes, with results indicating that it achieves real-time frame rates for a variety of image sizes (271 fps for 320 × 240, 42 fps for 640 × 480, and 23 fps for 800 × 600), comparing favorably with existing works while reducing the false-positive rate by 52%.
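
The following is a minimal software sketch of the search-space-reduction idea described in the abstract, not the paper's hardware design: candidate sliding windows are discarded using edge density and depth consistency cues before an SVM classifier is evaluated. The function name, thresholds, and parameters (detect_with_depth_edge_pruning, min_edge_density, depth_tolerance, window, stride, svm_weights, svm_bias) are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def detect_with_depth_edge_pruning(image, depth_map, edge_map,
                                   svm_weights, svm_bias,
                                   window=24, stride=4,
                                   min_edge_density=0.05,
                                   depth_tolerance=0.25):
    """Sliding-window detection with depth/edge search-space pruning.

    A window reaches the SVM classifier only if it contains enough edge
    pixels and its depth values are consistent enough to suggest a single
    compact object at one distance. All thresholds are placeholders.
    """
    h, w = image.shape
    detections = []
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            edges = edge_map[y:y + window, x:x + window]
            depth = depth_map[y:y + window, x:x + window]

            # Edge cue: skip textureless regions with few edge pixels.
            if edges.mean() < min_edge_density:
                continue

            # Depth cue: skip windows whose depth spread is too large to
            # belong to a single object at a consistent distance.
            if depth.std() > depth_tolerance * max(depth.mean(), 1e-6):
                continue

            # Only surviving windows are scored by a linear SVM.
            feature = image[y:y + window, x:x + window].reshape(-1)
            score = float(np.dot(svm_weights, feature) + svm_bias)
            if score > 0:
                detections.append((x, y, window, score))
    return detections
```

In the architecture the abstract describes, depth information could additionally constrain the expected object size at each location, shrinking the search over window scales; the sketch above uses a single fixed window size for brevity.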