Parallel implementation of a spatio-temporal visual saliency model

  • Authors:
  • A. Rahman; D. Houzet; D. Pellerin; S. Marat; N. Guyader

  • Affiliations:
  • GIPSA-lab, Grenoble, France (all authors)

  • Venue:
  • Journal of Real-Time Image Processing
  • Year:
  • 2011


Abstract

Human vision has been studied in depth over the past years, and several models have been proposed to simulate it on a computer. Some of these models concern visual saliency, which is potentially very interesting in many applications such as robotics, image analysis, compression, and video indexing. Unfortunately, these models are compute-intensive, with tight real-time requirements. Among the existing models, we have chosen a spatio-temporal one that combines static and dynamic information. In this paper, we propose a very efficient multi-GPU implementation of this model that reaches real-time performance. We present the algorithms of the model as well as several parallel optimizations on GPU, together with precision and execution-time results. The real-time execution of this multi-path model on multi-GPU makes it a powerful tool for many vision-related applications.
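The abstract mentions that the chosen model combines static and dynamic information into a single saliency map. As a minimal sketch of what such a fusion step can look like, the snippet below normalizes the two pathway maps and combines them with an additive term plus a multiplicative term that rewards locations salient in both pathways. This is a common fusion heuristic, not the paper's exact formula; the function names and the weighting are illustrative assumptions.

```python
import numpy as np

def normalize(m):
    # Rescale a map to [0, 1]; a flat map becomes all zeros.
    lo, hi = m.min(), m.max()
    return (m - lo) / (hi - lo) if hi > lo else np.zeros_like(m)

def fuse_saliency(static_map, dynamic_map):
    """Fuse a static and a dynamic saliency map (hypothetical weighting).

    Each normalized map contributes additively, and the product term
    boosts locations that are salient in both pathways. The actual
    model may use different, map-statistic-dependent weights.
    """
    s = normalize(static_map)
    d = normalize(dynamic_map)
    return normalize(s + d + s * d)
```

On a GPU, each of these element-wise operations maps naturally onto one thread per pixel, which is one reason this kind of model parallelizes well.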