From rendering to tracking point-based 3D models

  • Authors:
  • Christophe Dehais; Géraldine Morin; Vincent Charvillat

  • Affiliations:
  • IRIT-ENSEEIHT, 2 rue Charles Camichel, B.P. 7122, 31071 Toulouse Cedex, France (all authors)

  • Venue:
  • Image and Vision Computing
  • Year:
  • 2010

Abstract

This paper adds to the abundant visual tracking literature with two main contributions. First, we illustrate the value of Graphics Processing Units (GPUs) for efficient implementations of computer vision algorithms; second, we introduce point-based 3D models as a shape prior for real-time 3D tracking with a monocular camera. The joint use of point-based 3D models and the GPU allows us to adapt and simplify an existing tracking algorithm originally designed for triangular meshes. Point-based models are of particular interest in this context because they are the direct output of most laser scanners. We show that state-of-the-art techniques developed for point-based rendering can be used to compute, in real time, the intermediate values required for visual tracking. In particular, apparent-motion predictors are computed in parallel at each pixel, and novel views of the tracked object are generated online to support wide-baseline matching. Both computations derive from the same general surface splatting technique, which we implement on the GPU along with other low-level vision tasks, yielding a real-time tracking algorithm.
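To make the surface splatting idea concrete, the following is a minimal CPU sketch in NumPy, not the paper's method: it projects colored 3D points through a pinhole camera and accumulates isotropic Gaussian footprints, normalizing per pixel. The function name, the isotropic kernel, and the omission of visibility/depth handling are our simplifications; the paper's GPU pipeline uses the full (anisotropic, surface-aligned) splatting machinery.

```python
import numpy as np

def splat_points(points, colors, K, R, t, width, height, radius=2.0):
    """Simplified surface splatting sketch (hypothetical, not the paper's code).

    Each 3D point is projected with intrinsics K and pose (R, t), then
    rasterized as an isotropic 2D Gaussian; colors are accumulated with
    their weights and normalized per pixel. Full EWA splatting would use
    anisotropic, surface-aligned kernels and depth-based visibility.
    """
    cam = (R @ points.T + t.reshape(3, 1)).T        # camera-space points
    in_front = cam[:, 2] > 0                        # cull points behind camera
    proj = (K @ cam[in_front].T).T
    uv = proj[:, :2] / proj[:, 2:3]                 # perspective divide -> pixels

    accum = np.zeros((height, width, 3))            # weighted color sum
    weight = np.zeros((height, width))              # weight sum per pixel
    r = int(np.ceil(3 * radius))                    # footprint support radius
    for (u, v), c in zip(uv, colors[in_front]):
        cu, cv = int(round(u)), int(round(v))
        for y in range(max(0, cv - r), min(height, cv + r + 1)):
            for x in range(max(0, cu - r), min(width, cu + r + 1)):
                w = np.exp(-((x - u) ** 2 + (y - v) ** 2) / (2 * radius ** 2))
                accum[y, x] += w * c
                weight[y, x] += w
    mask = weight > 1e-8                            # pixels touched by a splat
    image = np.zeros_like(accum)
    image[mask] = accum[mask] / weight[mask][:, None]
    return image, weight
```

On the GPU, the inner per-pixel loop is what gets parallelized (one fragment per covered pixel), which is why the same accumulation pass can also emit per-pixel quantities such as the motion predictors used by the tracker.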