FPGA-based image processing for omnidirectional vision on mobile robots

  • Authors:
  • Jones Yudi Mori; Daniel Muñoz Arboleda; Janier Arias Garcia; Carlos Llanos Quintero; José Motta

  • Affiliations:
  • University of Brasilia, Brasilia, Brazil (all authors)

  • Venue:
  • Proceedings of the 24th Symposium on Integrated Circuits and Systems Design
  • Year:
  • 2011

Abstract

Omnidirectional vision systems have been used for mobile robot localization and navigation, taking advantage of a panoramic view for detecting objects. This paper presents a pipelined hardware architecture for image processing using a low-cost omnidirectional vision system, which was calibrated using a third-order polynomial interpolation. An Altera Cyclone II FPGA device was used to implement the hardware architectures, which were described in both the VHDL and Verilog hardware description languages. The complete system is composed of a spherical mirror, an 800×480-pixel camera connected to the FPGA, a spatial convolution filter for edge enhancement, a hardware architecture for estimating the distance of real objects, and a touch-screen display as the user interface. Synthesis results show that the image-processing algorithms can be effectively implemented in hardware, while test results demonstrate that the proposed FPGA architectures are suitable for mobile robot applications in which the distances to objects or other robots must be computed. Execution-time results demonstrate that the proposed hardware architecture achieves a speed-up factor of 61 compared with a desktop software solution based on the xPC Target real-time operating system.
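
The sketch below is a minimal software model of the two processing steps named in the abstract: a 3×3 spatial convolution for edge enhancement and a third-order polynomial mapping from image radius to real-world distance. It is not the authors' HDL design; the kernel values and polynomial coefficients are placeholders, since the paper's calibration constants are not given here.

    /* Illustrative C model (not the authors' VHDL/Verilog implementation). */
    #include <stdio.h>
    #include <stdlib.h>

    #define W 800
    #define H 480

    /* 3x3 edge-enhancement kernel (Laplacian sharpening, assumed for illustration). */
    static const int kernel[3][3] = {
        { 0, -1,  0},
        {-1,  5, -1},
        { 0, -1,  0}
    };

    /* Apply the 3x3 spatial convolution to an 8-bit grayscale image,
     * saturating the result to the 0..255 range. */
    static void convolve3x3(const unsigned char *in, unsigned char *out)
    {
        for (int y = 1; y < H - 1; y++) {
            for (int x = 1; x < W - 1; x++) {
                int acc = 0;
                for (int ky = -1; ky <= 1; ky++)
                    for (int kx = -1; kx <= 1; kx++)
                        acc += kernel[ky + 1][kx + 1] * in[(y + ky) * W + (x + kx)];
                if (acc < 0)   acc = 0;
                if (acc > 255) acc = 255;
                out[y * W + x] = (unsigned char)acc;
            }
        }
    }

    /* Map the pixel radius r (distance from the mirror center in the image)
     * to an estimated real-world distance with a third-order polynomial,
     * d = a0 + a1*r + a2*r^2 + a3*r^3. Coefficients are hypothetical. */
    static double radius_to_distance(double r)
    {
        const double a0 = 0.0, a1 = 0.35, a2 = 0.002, a3 = 1.5e-5;
        return a0 + r * (a1 + r * (a2 + r * a3));
    }

    int main(void)
    {
        unsigned char *img  = calloc(W * H, 1);
        unsigned char *edge = calloc(W * H, 1);
        if (!img || !edge) return 1;

        convolve3x3(img, edge);

        /* Example: object detected 120 pixels from the image center. */
        printf("estimated distance: %.2f (arbitrary units)\n", radius_to_distance(120.0));

        free(img);
        free(edge);
        return 0;
    }

In the hardware described by the paper, these stages would run as a pipeline fed directly by the camera stream; the nested loops above only model the per-pixel arithmetic of that pipeline in software.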