Training a Vision Guided Mobile Robot

  • Authors:
  • Gordon Wyeth

  • Affiliations:
  • Department of Computer Science and Electrical Engineering, The University of Queensland, Brisbane, QLD 4072, Australia. Email: wyeth@csee.uq.edu.au

  • Venue:
  • Machine Learning - Special issue on learning in autonomous robots
  • Year:
  • 1998


Abstract

This paper presents the design, implementation and evaluation of a trainable vision guided mobile robot. The robot, CORGI, has a CCD camera as its only sensor, which it is trained to use for a variety of tasks. The techniques used for training and the choice of natural light vision as the primary sensor make the methodology immediately applicable to tasks such as trash collection or fruit picking. For example, the robot is readily trained to perform a ball finding task which involves avoiding obstacles and aligning with tennis balls. The robot is able to move at speeds up to 0.8 ms^-1 while performing this task, and has never had a collision in the trained environment. It can process video and update the actuators at 11 Hz using a single $20 microprocessor to perform all computation. Further results are shown to evaluate the system for generalization across unseen domains, fault tolerance and dynamic environments.
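
The perception-action pipeline summarized above (grab a camera frame, run the trained controller, update the actuators at roughly 11 Hz) can be illustrated with a minimal sketch. The names below (Camera, TrainedController, set_wheel_speeds) are hypothetical placeholders for illustration only, not the paper's actual CORGI software.

```python
import time

# Hypothetical stubs standing in for the camera, the trained vision
# controller and the motor interface; not the paper's implementation.
class Camera:
    def grab_frame(self):
        """Return the latest CCD image as a 2-D array (stub)."""
        return [[0] * 64 for _ in range(48)]

class TrainedController:
    def command(self, frame):
        """Map an image to (left, right) wheel speeds in m/s (stub)."""
        return 0.4, 0.4  # placeholder: drive straight ahead

def set_wheel_speeds(left, right):
    """Send speed commands to the drive motors (stub)."""
    pass

def control_loop(camera, controller, rate_hz=11.0):
    """Perception-action loop: grab a frame, compute wheel speeds,
    update the actuators, and pace the cycle at roughly rate_hz."""
    period = 1.0 / rate_hz
    while True:
        start = time.monotonic()
        frame = camera.grab_frame()
        left, right = controller.command(frame)
        set_wheel_speeds(left, right)
        # Sleep off the remainder of the cycle to hold the update rate.
        elapsed = time.monotonic() - start
        if elapsed < period:
            time.sleep(period - elapsed)

if __name__ == "__main__":
    control_loop(Camera(), TrainedController())
```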