Tactic-based motion modeling and multi-sensor tracking

  • Authors:
  • Yang Gu

  • Affiliations:
  • Computer Science Department, Carnegie Mellon University, Pittsburgh, PA

  • Venue:
  • AAAI'05: Proceedings of the 20th National Conference on Artificial Intelligence - Volume 3
  • Year:
  • 2005

Abstract

Tracking in essence consists of combining sensory information with a motion model to estimate the position of a moving object. Tracking performance therefore depends directly on the accuracy of both the motion model and the sensory information. For a vision sensor such as a camera, the estimate is translated into a command that guides the camera where to look. In this paper, we contribute a method for efficient tracking that uses a tactic-based motion model combined with vision and infrared sensory information. We use a supervised learning technique to map the state being tracked to the commands that lead the camera to consistently track the object. We present the probabilistic algorithms in detail and report empirical results both from simulation experiments and from their effective execution on a Segway RMP robot.
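
The sketch below illustrates the general tracking loop the abstract describes: a motion-model prediction step followed by measurement updates from two sensors (vision and infrared), with the resulting estimate driving a camera command. It is a minimal illustration assuming a 1-D linear-Gaussian model and a simple proportional pan law; the paper's tactic-based motion model and learned state-to-command mapping are not reproduced here, and all function names are hypothetical.

```python
# Minimal sketch of a predict/update tracking loop fusing two sensors.
# Assumes a 1-D constant-velocity motion model with Gaussian noise; the
# paper's tactic-based model and supervised command mapping are not shown.
import numpy as np

def predict(x, P, v, q):
    """Motion-model prediction: shift the estimate by v, grow variance by q."""
    return x + v, P + q

def update(x, P, z, r):
    """Kalman update fusing one scalar position measurement z with variance r."""
    k = P / (P + r)                     # Kalman gain
    return x + k * (z - x), (1.0 - k) * P

def camera_command(x_est, pan, gain=0.5):
    """Placeholder for the learned state-to-command mapping: steer toward the estimate."""
    return gain * (x_est - pan)

rng = np.random.default_rng(0)
x_est, P = 0.0, 1.0                     # state estimate and its variance
true_x, pan = 0.0, 0.0
for t in range(5):
    true_x += 1.0                                   # object motion
    x_est, P = predict(x_est, P, v=1.0, q=0.1)      # motion-model prediction
    z_vision = true_x + rng.normal(0.0, 0.5)        # noisy vision measurement
    z_ir = true_x + rng.normal(0.0, 0.3)            # noisy infrared measurement
    x_est, P = update(x_est, P, z_vision, r=0.25)   # fuse vision
    x_est, P = update(x_est, P, z_ir, r=0.09)       # fuse infrared
    pan += camera_command(x_est, pan)               # command the camera
    print(f"t={t} true={true_x:.2f} est={x_est:.2f} pan={pan:.2f}")
```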