eyeDog: an assistive-guide robot for the visually impaired

  • Authors:
  • Georgios Galatas, Christopher McMurrough, Gian Luca Mariottini, Fillia Makedon

  • Affiliation:
  • University of Texas at Arlington, Arlington, TX (all authors)

  • Venue:
  • Proceedings of the 4th International Conference on PErvasive Technologies Related to Assistive Environments
  • Year:
  • 2011

Abstract

Visually impaired people can navigate unfamiliar areas by relying on the assistance of other people, canes, or specially trained guide dogs. Guide dogs provide the impaired person with the highest degree of mobility and independence, but require expensive training and selective breeding. In this paper we describe the design and development of a prototype assistive-guide robot (eyeDog) that provides the visually impaired person with autonomous vision-based navigation and laser-based obstacle avoidance capabilities. This kind of assistive-guide robot has several advantages, such as robust performance and reduced cost and maintenance. The main components of our system are the Create robotic platform (from iRobot), a netbook, an on-board USB webcam, and a LIDAR unit. The camera is used as the primary exteroceptive sensor for the navigation task; the frames captured by the camera are processed in order to robustly estimate the position of the vanishing point associated with the road/corridor where the eyeDog needs to move. The controller then steers the robot until the vanishing point and the image center coincide. This condition ensures that the robot moves parallel to the direction of the road/corridor. While moving, the robot uses the LIDAR for obstacle avoidance.
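The navigation loop described above (estimate the vanishing point of the corridor from detected line segments, then steer until it coincides with the image center) can be sketched as follows. This is an illustrative outline, not the authors' implementation: the least-squares intersection of line segments, the proportional controller, the gain value, and the sign convention are all assumptions, and line-segment detection (e.g. by a Hough transform on the camera frames) is presumed to happen upstream.

```python
import numpy as np

def vanishing_point(segments):
    """Least-squares intersection of 2-D line segments.

    segments: iterable of (x1, y1, x2, y2) rows.
    Returns the point minimising the sum of squared perpendicular
    distances to all lines -- a common vanishing-point estimate.
    """
    segments = np.asarray(segments, dtype=float)
    p = segments[:, :2]                       # a point on each line
    d = segments[:, 2:] - segments[:, :2]     # direction vectors
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    # Unit normals: rotate each direction by 90 degrees.
    n = np.stack([-d[:, 1], d[:, 0]], axis=1)
    # Each line contributes the constraint n^T x = n^T p;
    # stack them into the normal equations M x = b.
    A = n[:, :, None] * n[:, None, :]         # outer products n n^T
    M = A.sum(axis=0)
    b = (A @ p[:, :, None]).sum(axis=0).ravel()
    return np.linalg.solve(M, b)

def steering_command(vp_x, image_width, gain=0.005):
    """Proportional steering toward the vanishing point.

    The gain and the sign convention (positive = turn left)
    are illustrative assumptions.
    """
    error = vp_x - image_width / 2.0
    return -gain * error

# Two corridor edges converging at (320, 200) in a 640-px-wide frame:
segs = [(100, 480, 320, 200), (540, 480, 320, 200)]
vp = vanishing_point(segs)
cmd = steering_command(vp[0], 640)   # zero: vanishing point is centered
```

With the vanishing point already at the image center, the command is zero and the robot continues straight; an off-center vanishing point yields a turn that re-aligns the heading with the corridor axis.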