A hybrid tracking method for surgical augmented reality

  • Authors:
  • Jan Fischer; Michael Eichler; Dirk Bartz; Wolfgang Straßer

  • Affiliations:
  • WSI/GRIS - VCM, University of Tübingen, 72076 Tübingen, Germany (all authors)

  • Venue:
  • Computers & Graphics
  • Year:
  • 2007

Abstract

Camera pose estimation is one of the most important, but also one of the most challenging, tasks in augmented reality. Without a highly accurate estimate of the position and orientation of the digital video camera, it is impossible to render a spatially correct overlay of graphical information. This requirement is even more crucial in medical applications, where virtual objects typically have to be correctly aligned with the patient. Many experimental AR systems use specialized tracking devices, which are usually not certified for medical settings. We have developed an AR framework for surgical applications based on existing medical equipment. A surgical navigation device delivers tracking information measured by a built-in infrared camera system, which is the basis for the pose estimation of the AR video camera. Depending on the conditions in the environment, however, this infrared pose data can contain discernible tracking errors. One main drawback of the medical tracking device is that, while it delivers very high positional accuracy, the reported camera orientation can contain a relatively large error. In this article, we present a hybrid tracking scheme for medical augmented reality based on a certified medical tracking system. The final pose estimation takes both the initial infrared tracking data and salient features in the camera image into account. The vision-based component of the tracking algorithm relies on a pre-defined graphical model of the observed scene. The infrared and vision-based tracking data are tightly integrated into a unified pose estimation algorithm, which is based on an iterative numerical optimization method. We describe an implementation of the algorithm and present experimental data showing that our new method delivers a more accurate pose estimation.
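The unified pose estimation described in the abstract lends itself to a short illustration. The Python sketch below shows one plausible way such a hybrid refinement might be structured, under stated assumptions: the infrared pose serves as the starting point, a set of 3D model points has already been matched to 2D image features, and a position prior encodes the infrared system's high positional (but weaker rotational) accuracy. The function names, the weight `w_pos`, and the residual design are hypothetical and not taken from the paper.

```python
# Illustrative sketch of a hybrid IR/vision pose refinement.
# Not the authors' implementation; requires NumPy and SciPy.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def project(points_3d, rvec, tvec, fx, fy, cx, cy):
    """Project 3D model points into the image with a pinhole camera model."""
    cam = Rotation.from_rotvec(rvec).apply(points_3d) + tvec
    u = fx * cam[:, 0] / cam[:, 2] + cx
    v = fy * cam[:, 1] / cam[:, 2] + cy
    return np.column_stack((u, v))


def residuals(params, points_3d, points_2d, ir_tvec, intrinsics, w_pos):
    """Reprojection error of the matched features, plus a prior that keeps
    the position near the infrared measurement (the IR position is trusted
    more than the IR orientation)."""
    rvec, tvec = params[:3], params[3:]
    fx, fy, cx, cy = intrinsics
    reproj = (project(points_3d, rvec, tvec, fx, fy, cx, cy) - points_2d).ravel()
    pos_prior = w_pos * (tvec - ir_tvec)
    return np.concatenate((reproj, pos_prior))


def refine_pose(ir_rvec, ir_tvec, points_3d, points_2d, intrinsics, w_pos=10.0):
    """Start from the infrared pose and refine it against image features
    with an iterative least-squares optimizer (Levenberg-Marquardt)."""
    x0 = np.concatenate((ir_rvec, ir_tvec))
    result = least_squares(
        residuals, x0,
        args=(points_3d, points_2d, ir_tvec, intrinsics, w_pos),
        method="lm",
    )
    return result.x[:3], result.x[3:]
```

Weighting only the position residuals lets the optimizer rotate the camera freely to fit the image features while staying anchored to the trusted infrared position, which mirrors the accuracy asymmetry the abstract describes.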