Computer vision and augmented reality for guiding assembly

  • Authors:
  • Jose M. Molineros; Rajeev Sharma

  • Affiliations:
  • The Pennsylvania State University; The Pennsylvania State University

  • Venue:
  • Computer vision and augmented reality for guiding assembly
  • Year:
  • 2002


Abstract

An Augmented Reality (AR) system that can enhance a user's view of the surrounding scene with annotations based on the scene content has many potential applications. We consider the problem of scene augmentation in the context of a human assembling an object from its components. To exploit AR's potential, two main problems need to be addressed. The first is designing an effective augmentation scheme for information presentation and control. The second is providing accurate and robust sensing to determine the state of the surrounding environment.

We utilize concepts from robot assembly planning to develop a systematic framework for presenting augmentation stimuli in the assembly domain. An interactive augmentation design and control engine is described. The engine can be used for developing and visualizing multi-modal augmentation schemes for assembly, as well as for controlling information presentation in an Augmented Reality system. Its functionality is demonstrated with specific scenarios whose goal is to create effective multi-modal augmentation displays for evaluating assembly.

To provide the appropriate augmentation stimulus at the right position and time, an AR system needs a sensor to interpret the surrounding scene. We take an incremental approach using computer vision to address this sensing problem. First, we develop real-time tracking based on a marker coding scheme: fiducial markers are placed on each assembly component to uniquely identify and track it. An experimental prototype system is described that implements computer vision algorithms for real-time marker-based tracking.

Although fiducials offer relative simplicity and flexibility, occlusion by the manipulator and by other assembly parts makes more general computer vision techniques desirable. A further domain-specific problem is the changing shape and appearance of assembly objects as they are connected together. A combination of concepts from robot assembly planning, model-based object recognition, and image feature classification is used to develop an object recognition scheme for parts being assembled by a human. Constraints from the assembly domain, together with vision algorithms based on search in transformation space, make the problem tractable despite the changing object appearance and heavy occlusion. We use this scheme for subassembly recognition, tracking, and assembly state verification.
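To illustrate the marker-based sensing loop described in the abstract, the sketch below uses OpenCV's ArUco fiducials as a stand-in for the thesis's marker coding scheme. It is a minimal, assumption-laden example, not the authors' implementation: the marker-to-part mapping, the assembly step table, and the camera index are invented for illustration, and it assumes opencv-contrib-python with the classic cv2.aruco API (pre-4.7 naming such as DetectorParameters_create).

import cv2

# Hypothetical mapping from fiducial marker id to assembly part name.
MARKER_TO_PART = {0: "base", 1: "bracket", 2: "cover"}

# Hypothetical assembly plan: each step lists the parts whose markers must be
# visible (identified and tracked) before the augmentation advances.
ASSEMBLY_STEPS = [
    {"base"},
    {"base", "bracket"},
    {"base", "bracket", "cover"},
]

def detect_parts(frame, dictionary, params):
    """Return the set of part names whose fiducials are visible, plus corners."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary, parameters=params)
    if ids is None:
        return set(), corners
    parts = {MARKER_TO_PART[i] for i in ids.flatten() if i in MARKER_TO_PART}
    return parts, corners

def main():
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    params = cv2.aruco.DetectorParameters_create()
    cap = cv2.VideoCapture(0)  # assumed webcam at index 0
    step = 0
    while step < len(ASSEMBLY_STEPS):
        ok, frame = cap.read()
        if not ok:
            break
        parts, corners = detect_parts(frame, dictionary, params)
        # Advance the assembly state only when every part required by the
        # current step has been identified; occluded markers simply stall here.
        if ASSEMBLY_STEPS[step] <= parts:
            step += 1
        cv2.aruco.drawDetectedMarkers(frame, corners)
        cv2.putText(frame, "step %d/%d" % (step, len(ASSEMBLY_STEPS)), (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
        cv2.imshow("assembly guidance sketch", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()

The structure mirrors the pipeline the abstract outlines: identify tagged parts in each frame, compare the visible set against the expected assembly state, and only then advance the guidance to the next instruction.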