Autonomous mobile robot that can read

  • Authors:
  • Dominic Létourneau, François Michaud, Jean-Marc Valin

  • Affiliations:
  • Research Laboratory on Mobile Robotics and Intelligent Systems (LABORIUS), Department of Electrical Engineering and Computer Engineering, University of Sherbrooke, Sherbrooke, Quebec, Canada (all authors)

  • Venue:
  • EURASIP Journal on Applied Signal Processing
  • Year:
  • 2004

Abstract

The ability to read would surely contribute to the autonomy of mobile robots operating in the real world. The process seems fairly simple: the robot must be capable of acquiring an image of a message, extracting the characters, and recognizing them as symbols and words. Running an optical character recognition (OCR) algorithm on a mobile robot, however, brings additional challenges: the robot must control its position in the world and its pan-tilt-zoom camera to find textual messages to read, compensate for its viewpoint of the message when necessary, and decode the message using limited onboard processing capabilities. The robot also has to deal with variations in lighting conditions. In this paper, we present our approach and demonstrate that it is feasible for an autonomous mobile robot to read messages of specific colors and fonts in real-world conditions. We outline the constraints under which the approach works and present results obtained using a Pioneer 2 robot equipped with a 233 MHz Pentium processor and a Sony EVI-D30 pan-tilt-zoom camera.
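
The abstract outlines a pipeline: find a message with the pan-tilt-zoom camera, segment the characters (exploiting their known colors), and recognize them. The sketch below illustrates only the color-segmentation and character-extraction stages, using OpenCV; the HSV color range, the image path, and the contour-based heuristics are illustrative assumptions and do not reproduce the authors' implementation, which is tuned to specific colors and fonts on the robot's limited hardware.

```python
# Minimal sketch of color-based character segmentation. NOT the paper's
# implementation: the HSV range, image path, and contour heuristics are
# illustrative assumptions. Requires OpenCV 4.x (pip install opencv-python).
import cv2
import numpy as np

def extract_character_regions(image_path, hsv_low=(100, 80, 80), hsv_high=(130, 255, 255)):
    """Segment characters of an assumed color (here, a blue range) and
    return one binary image per candidate character, sorted left to right."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)

    # Thresholding in HSV space is less sensitive to lighting changes
    # than thresholding raw RGB values.
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))

    # Each connected blob of the target color is a candidate character.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 50]

    # Approximate reading order: sort candidates by horizontal position.
    boxes.sort(key=lambda b: b[0])
    return [mask[y:y + h, x:x + w] for (x, y, w, h) in boxes]

if __name__ == "__main__":
    # "message.png" is a placeholder; on the robot, frames would come from
    # the pan-tilt-zoom camera rather than a file.
    characters = extract_character_regions("message.png")
    print(f"Found {len(characters)} candidate characters")
```

Recognizing each segmented character (for example with template matching or a small classifier suited to the limited onboard processor) would follow as a separate stage.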