Visually Guided Cooperative Robot Actions Based on Information Quality

  • Authors:
  • Vivek A. Sujan; Steven Dubowsky

  • Affiliations:
  • Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA (both authors)

  • Venue:
  • Autonomous Robots
  • Year:
  • 2005


Abstract

In field environments it is usually not possible to provide robots in advance with valid geometric models of their environment and task element locations. The robot or robot team must build and use these models to locate critical task elements by performing appropriate sensor-based actions. This paper presents a multi-agent algorithm for a manipulator guidance task based on cooperative visual feedback in an unknown environment. First, an information-based iterative algorithm plans the robots' visual exploration strategy, enabling them to efficiently build 3D models of the environment and task elements. The algorithm uses the measured scene information to select the next camera pose based on the expected new information content of that pose, employing a metric derived from Shannon's information theory to determine optimal sensing poses for the agent(s) mapping a highly unstructured environment. Second, once an appropriate environment model has been built, the quality of the information in the model is used to determine the constraint-based optimal view for task execution. The algorithm applies to a single agent as well as to multiple cooperating agents. Simulation and experimental demonstrations on a cooperative robot platform performing a two-component insertion/mating task in the field show the effectiveness of the algorithm.
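
The abstract's first stage, selecting the next camera pose by its expected new information content, can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a probabilistic occupancy-grid map and a hypothetical visibility function `visible_cells()` standing in for the sensor's ray-casting/frustum model. The expected gain of a candidate pose is taken as the total Shannon entropy of the cells it would observe, and the next-best view is the pose that maximizes this gain.

```python
import numpy as np

def cell_entropy(p):
    """Shannon entropy (bits) of a cell with occupancy probability p."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def expected_information_gain(grid, pose, visible_cells):
    """Sum the entropy of the grid cells a camera at `pose` would observe.

    `grid` holds per-cell occupancy probabilities (a NumPy array);
    `visible_cells(grid, pose)` is an assumed stand-in for the sensor's
    visibility model and returns the indices of observable cells.
    """
    idx = visible_cells(grid, pose)
    return sum(cell_entropy(grid[i]) for i in idx)

def next_best_view(grid, candidate_poses, visible_cells):
    """Pick the candidate camera pose with the highest expected new information."""
    gains = [expected_information_gain(grid, q, visible_cells)
             for q in candidate_poses]
    return candidate_poses[int(np.argmax(gains))]
```

In this sketch, cells whose occupancy probability is near 0.5 (unknown) contribute the most entropy, so poses that view large unexplored regions are preferred, which matches the exploration behavior the abstract describes at a high level.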