Parallelizing and optimizing LIP-Canny using NVIDIA CUDA

  • Authors:
  • Rafael Palomar; José M. Palomares; José M. Castillo; Joaquín Olivares; Juan Gómez-Luna

  • Affiliations:
  • Department of Computer Architecture, Electronics and Electronic Technology, University of Córdoba (all authors)

  • Venue:
  • IEA/AIE'10: Proceedings of the 23rd International Conference on Industrial Engineering and Other Applications of Applied Intelligent Systems - Volume Part III
  • Year:
  • 2010

Abstract

The Canny algorithm is a well-known edge detector that is widely used as a preprocessing stage in many computer vision algorithms. An alternative, the LIP-Canny algorithm, is based on a robust mathematical model closer to the human visual system and obtains better results in terms of edge detection. In this work we describe the LIP-Canny algorithm from the perspective of its parallelization and optimization using the NVIDIA CUDA framework. Furthermore, we present comparative results between an implementation of this algorithm using NVIDIA CUDA and an analogous C/C++ implementation.
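
The appeal of CUDA for a Canny-style pipeline is that most stages (smoothing, gradient computation, non-maximum suppression) are per-pixel operations that map naturally onto one GPU thread per pixel. The sketch below is only an illustration of that mapping, not the authors' LIP-Canny implementation: it shows a plain Sobel gradient-magnitude kernel, and the kernel name, image size, and launch configuration are assumptions made for the example.

```cuda
// Illustrative sketch: one Canny-like stage (Sobel gradient magnitude) as a
// CUDA kernel, one thread per output pixel. Not the paper's LIP-Canny code.
#include <cmath>
#include <cuda_runtime.h>

__global__ void sobelMagnitude(const float* in, float* out, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Border pixels: no full 3x3 neighbourhood, write zero.
    if (x == 0 || y == 0 || x == width - 1 || y == height - 1) {
        out[y * width + x] = 0.0f;
        return;
    }

    // 3x3 Sobel convolution around (x, y).
    float gx = -in[(y-1)*width + (x-1)] + in[(y-1)*width + (x+1)]
             - 2.0f*in[y*width + (x-1)] + 2.0f*in[y*width + (x+1)]
             -  in[(y+1)*width + (x-1)] + in[(y+1)*width + (x+1)];
    float gy = -in[(y-1)*width + (x-1)] - 2.0f*in[(y-1)*width + x] - in[(y-1)*width + (x+1)]
             +  in[(y+1)*width + (x-1)] + 2.0f*in[(y+1)*width + x] + in[(y+1)*width + (x+1)];

    out[y * width + x] = sqrtf(gx * gx + gy * gy);
}

int main()
{
    const int width = 512, height = 512;     // assumed image size for the example
    const size_t bytes = width * height * sizeof(float);

    float *d_in = nullptr, *d_out = nullptr;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemset(d_in, 0, bytes);              // placeholder input image

    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    sobelMagnitude<<<grid, block>>>(d_in, d_out, width, height);
    cudaDeviceSynchronize();

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

In a full LIP-Canny pipeline the pixel values would first be transformed into the logarithmic (LIP) domain and further stages (non-maximum suppression, hysteresis thresholding) would follow; this sketch only illustrates the data-parallel structure that CUDA exploits.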