Efficient Information-based Visual Robotic Mapping in Unstructured Environments

  • Authors:
  • Vivek A. Sujan; Steven Dubowsky

  • Affiliations:
  • Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA (both authors)

  • Venue:
  • International Journal of Robotics Research
  • Year:
  • 2005

Abstract

In field environments it is often not possible to provide robot teams with detailed a priori environment and task models. In such unstructured environments, robots must build a dimensionally accurate three-dimensional geometric model of their surroundings by performing appropriate sensing actions. However, uncertainty in robot locations, together with sensing limitations and occlusions, makes this difficult. A new algorithm, based on iterative sensor planning and sensor redundancy, is proposed to build a geometrically consistent dimensional map of the environment for mobile robots with articulated sensors. The aim is to acquire new information that leads to more detailed and complete knowledge of the environment. The robot(s) are controlled to maximize the geometric knowledge gained of the environment using an evaluation function based on Shannon's information theory. Using the measured data and Markovian predictions of the unknown environment, an information-theoretic metric is maximized to determine a robotic agent's next best view (NBV) of the environment. Data collected at this NBV pose are fused into the measured environment map using a Kalman filter statistical uncertainty model. The process continues until the environment map is complete. The work is unique in its application of information theory to enhance the performance of environment-sensing robot agents. It may be used by multiple distributed and decentralized sensing agents for efficient and accurate cooperative environment modeling. The algorithm makes no assumptions about the environment structure; hence, it is robust to robot failure, since the environment model being built does not depend on any single agent's frame but is set in an absolute reference frame. It accounts for sensing uncertainty, robot motion uncertainty, environment model uncertainty, and other critical parameters, and it allows regions of higher interest to receive greater attention from the agents.
This algorithm is particularly well suited to unstructured environments, where sensor uncertainty and occlusions are significant. Simulations and experiments show the effectiveness of this algorithm.
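The NBV step described in the abstract, selecting the sensor pose that maximizes an information-theoretic metric, can be illustrated with a minimal sketch over a probabilistic occupancy grid. This is not the paper's implementation; the function names, the grid representation, and the `visibility_fn` interface are assumptions introduced for illustration. The key idea it demonstrates is that a cell's Shannon entropy measures how unknown it is, and the best next view is the candidate pose whose visible cells carry the most total entropy.

```python
import numpy as np

def cell_entropy(p):
    # Shannon entropy (bits) of a cell's occupancy probability:
    # 0 for fully known cells, 1 bit at p = 0.5 (maximally unknown).
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def expected_information_gain(grid, visible_mask):
    # Sum the entropy of the cells a candidate pose would observe:
    # a pose looking at a well-mapped region scores near zero, while
    # one covering unexplored cells scores highly.
    return cell_entropy(grid[visible_mask]).sum()

def next_best_view(grid, candidate_poses, visibility_fn):
    # Evaluate each candidate sensor pose (visibility_fn is a
    # hypothetical occlusion/field-of-view model returning a boolean
    # mask of observable cells) and pick the highest-gain pose.
    return max(candidate_poses,
               key=lambda pose: expected_information_gain(grid, visibility_fn(pose)))
```

In a full system, `visibility_fn` would encode the articulated sensor's field of view and occlusions, and the Markovian prediction mentioned in the abstract would supply the prior probabilities for cells not yet observed.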
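The fusion step, merging data collected at the NBV pose into the running map with a Kalman filter uncertainty model, reduces in its simplest per-cell form to the scalar Kalman update below. This is a sketch under the assumption of independent Gaussian cell estimates, not the paper's full statistical model; the function name and arguments are illustrative.

```python
def kalman_fuse(map_mean, map_var, meas, meas_var):
    # Per-cell scalar Kalman update: the gain weights the new
    # measurement by relative uncertainty, so a precise measurement
    # (small meas_var) dominates and the fused variance never
    # exceeds either input variance.
    gain = map_var / (map_var + meas_var)
    fused_mean = map_mean + gain * (meas - map_mean)
    fused_var = (1.0 - gain) * map_var
    return fused_mean, fused_var
```

For example, fusing a prior estimate of 0.0 with variance 1.0 and a measurement of 2.0 with variance 1.0 yields a fused estimate of 1.0 with variance 0.5, matching the intuition that two equally trusted sources average and jointly shrink uncertainty. Iterating this update at each NBV pose is what drives the map's uncertainty down until the mapping process is complete.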