Automatic confidence adjustment of visual cues in model-based camera tracking

  • Authors:
  • Hanhoon Park; Jihyun Oh; Byung-Kuk Seo; Jong-Il Park

  • Affiliations:
  • -; -; -; Department of Electronics and Computer Engineering, 17 Haengdang-dong, Seongdong-gu, Seoul, 133-791, Korea

  • Venue:
  • Computer Animation and Virtual Worlds - VRCAI 08
  • Year:
  • 2010


Abstract

Model-based camera tracking estimates a precise camera pose from visual cues (e.g., feature points, edges) extracted from camera images, given a 3D scene model and a rough initial camera pose. This paper proposes an automatic method for flexibly adjusting the confidence of visual cues in model-based camera tracking. The adjustment is based on the conditions of the target object and scene and on the reliability of the initial or previous camera pose. Under uncontrolled or less-controlled working environments, the proposed object-adaptive tracking method runs at 20 frames per second on an ultra-mobile personal computer (UMPC) with an average tracking error within 3 pixels at a camera image resolution of 320 × 240 pixels. This capability enabled the proposed method to be successfully applied to a mobile augmented reality (AR) guidance system for a museum. Copyright © 2009 John Wiley & Sons, Ltd.

Figure: Object-adaptive camera tracking. The red wireframe lines represent the 3D graphic models of the objects. The first-row images show the initial tracking results obtained with ultrasonic and inertial sensors. The second-, third-, and fourth-row images show the results when the η values are 0, 0.3, and 1, respectively. The images marked with black boxes are the results of the object-adaptive tracking method, where the η values are automatically adjusted to the optimal value for each object.
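The abstract does not give the exact blending rule, but the η values in the figure caption suggest a confidence weight in [0, 1] that interpolates between the two cue types. A minimal sketch of such confidence-weighted blending, assuming a simple linear combination; the function names `blended_residual` and `select_eta` are hypothetical illustrations, not the paper's actual implementation:

```python
def blended_residual(edge_err: float, point_err: float, eta: float) -> float:
    """Linearly blend edge and feature-point tracking errors by confidence eta.

    eta = 1 trusts edge cues fully; eta = 0 trusts feature points fully.
    (Assumed formulation; the paper's actual blending rule may differ.)
    """
    if not 0.0 <= eta <= 1.0:
        raise ValueError("eta must lie in [0, 1]")
    return eta * edge_err + (1.0 - eta) * point_err


def select_eta(tracking_error, candidates=(0.0, 0.3, 1.0)) -> float:
    """Pick the candidate eta with the smallest measured tracking error,
    mimicking the per-object automatic adjustment described above.

    tracking_error: callable mapping an eta value to a measured
    reprojection error for the current object (treated as a black box).
    """
    return min(candidates, key=tracking_error)
```

In practice `tracking_error` would rerun pose estimation with the given weight and measure the reprojection error against the 3D model, so the system can settle on a per-object η as the caption describes.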