Semi-Autonomous Generation of Appearance-based Edge Models from Image Sequences

  • Authors:
  • Jeremiah Neubert; John Pretlove; Tom Drummond

  • Affiliations:
  • University of North Dakota (jeremiah.neubert@und.edu); ABB Research (john.pretlove@no.abb.com); Cambridge University (twd20@cam.ac.uk)

  • Venue:
  • ISMAR '07: Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality
  • Year:
  • 2007


Abstract

Many of the robust visual tracking techniques used by augmented reality applications rely on 3D models and information extracted from images. Models enhanced with image information make it possible to initialize tracking and detect poor registration. Unfortunately, generating 3D CAD models and registering them to image information can be a time-consuming operation. The process regularly requires multiple trips between the site being modeled and the workstation used to create the model. The system presented in this work eliminates the need for a separately generated 3D model by using modern structure-from-motion techniques to extract the model and the associated image information directly from an image sequence. The technique can be implemented on any handheld device equipped with a camera and a network connection. Creating the model requires minimal user interaction in the form of a few cues identifying planar regions on the object of interest. In addition, the system selects a set of keyframes for each region to capture viewpoint-based appearance changes. This work also presents a robust tracking framework that takes advantage of these new edge models. The performance of both the modeling technique and the tracking system is verified on several different objects.
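
The per-region keyframe selection mentioned in the abstract can be illustrated with a minimal sketch. The Python snippet below greedily keeps frames whose viewing direction toward a planar region differs from every previously kept keyframe by more than a fixed angle, so that viewpoint-dependent appearance changes are covered. The function names, the 20-degree threshold, and the geometry are illustrative assumptions, not the authors' implementation.

    import numpy as np

    # Assumed threshold: keep a new keyframe once the viewpoint has rotated
    # this far relative to every previously kept keyframe (not from the paper).
    ANGLE_THRESHOLD_DEG = 20.0

    def viewing_direction(camera_center, region_centroid):
        """Unit vector from the planar region's centroid toward the camera."""
        d = camera_center - region_centroid
        return d / np.linalg.norm(d)

    def select_keyframes(camera_centers, region_centroid):
        """Greedily keep frames whose viewing direction differs from all
        previously kept keyframes by more than ANGLE_THRESHOLD_DEG."""
        kept_indices, kept_dirs = [], []
        for i, c in enumerate(camera_centers):
            v = viewing_direction(c, region_centroid)
            # Angle between v and each stored keyframe direction.
            if all(np.degrees(np.arccos(np.clip(np.dot(v, k), -1.0, 1.0)))
                   > ANGLE_THRESHOLD_DEG for k in kept_dirs):
                kept_indices.append(i)
                kept_dirs.append(v)
        return kept_indices

    if __name__ == "__main__":
        centroid = np.array([0.0, 0.0, 0.0])
        # Synthetic camera path: an arc around the planar region.
        centers = [np.array([np.cos(t), 0.2, np.sin(t)]) * 2.0
                   for t in np.linspace(0.0, np.pi / 2, 30)]
        print(select_keyframes(centers, centroid))

Under these assumptions, a quarter-circle camera sweep yields a handful of keyframes spaced roughly 20 degrees apart, each of which could then store the image information used to model the region's appearance from that viewpoint.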