A Framework for 3D Model-Based Visual Tracking Using a GPU-Accelerated Particle Filter

  • Authors: James Anthony Brown; David W. Capson
  • Affiliations: McMaster University, Hamilton; McMaster University, Hamilton

  • Venue: IEEE Transactions on Visualization and Computer Graphics
  • Year: 2012


Abstract

A novel framework for acceleration of particle filtering approaches to 3D model-based, markerless visual tracking in monocular video is described. Specifically, we present a methodology for partitioning and mapping the computationally expensive weight-update stage of a particle filter to a graphics processing unit (GPU) to achieve particle- and pixel-level parallelism. NVIDIA CUDA and Direct3D are employed to harness the massively parallel computational power of modern GPUs for simulation (3D model rendering) and evaluation (segmentation, feature extraction, and weight calculation) of hundreds of particles at high speeds. The proposed framework addresses the computational intensity that is intrinsic to all particle filter approaches, including those that have been modified to minimize the number of particles required for a particular task. Performance and tracking-quality results for rigid-object and articulated-hand tracking experiments demonstrate markerless, model-based visual tracking on consumer-grade graphics hardware with pixel-level accuracy up to 95 percent at 60+ frames per second. The framework accelerates particle evaluation by up to 49 times over a comparable CPU-only implementation, providing an increased particle count while maintaining real-time frame rates.
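To make the particle- and pixel-level mapping concrete, the following is a minimal CUDA sketch of a weight-update stage, not the authors' implementation: the kernel name (updateWeights), the buffer layout (one binary rendering per particle, stacked in a single device array), and the similarity measure (fraction of pixels matching between a rendered hypothesis and the segmented camera frame) are all illustrative assumptions. It shows the two levels of parallelism the abstract describes: one thread block per particle, with the block's threads striding over that particle's pixels and a shared-memory reduction producing the particle's unnormalized weight.

    #include <cuda_runtime.h>

    // One block per particle (particle-level parallelism); each thread
    // strides over that particle's pixels (pixel-level parallelism).
    // All names and the matching-pixel similarity measure are hypothetical.
    __global__ void updateWeights(const unsigned char* rendered, // N stacked binary renderings
                                  const unsigned char* observed, // segmented camera frame
                                  float* weights,                // one weight per particle
                                  int pixelsPerImage)
    {
        extern __shared__ int partial[];                 // per-thread match counts
        const unsigned char* hyp =
            rendered + (size_t)blockIdx.x * pixelsPerImage;

        int matches = 0;
        for (int p = threadIdx.x; p < pixelsPerImage; p += blockDim.x)
            matches += (hyp[p] == observed[p]);

        partial[threadIdx.x] = matches;
        __syncthreads();

        // Tree reduction over the block to sum per-thread counts.
        for (int s = blockDim.x / 2; s > 0; s >>= 1) {
            if (threadIdx.x < s)
                partial[threadIdx.x] += partial[threadIdx.x + s];
            __syncthreads();
        }

        if (threadIdx.x == 0)                            // thread 0 writes the weight
            weights[blockIdx.x] = (float)partial[0] / pixelsPerImage;
    }

    // Hypothetical launch: one block per particle, 256 threads per block,
    // dynamic shared memory sized to one int per thread.
    // updateWeights<<<numParticles, 256, 256 * sizeof(int)>>>(
    //     dRendered, dObserved, dWeights, width * height);

Giving each particle its own block keeps the per-particle reduction entirely in on-chip shared memory, so only the small weights array returns to global memory; normalization and resampling can then run in a separate, much cheaper pass.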