Towards a real-time system for finding and reading signs for visually impaired users

  • Authors:
  • Huiying Shen; James M. Coughlan

  • Affiliations:
  • The Smith-Kettlewell Eye Research Institute, San Francisco, CA (both authors)

  • Venue:
  • ICCHP'12: Proceedings of the 13th International Conference on Computers Helping People with Special Needs, Part II
  • Year:
  • 2012

Abstract

Printed text is a ubiquitous form of information that is inaccessible to many blind and visually impaired people unless it is rendered in a non-visual form such as Braille. OCR (optical character recognition) systems have long been used by blind and visually impaired persons to read documents such as books and bills; recently this technology has been packaged in portable devices such as the smartphone-based kReader Mobile (from KNFB Reading Technology, Inc.), which lets the user photograph a document such as a restaurant menu and hear its text read aloud. While this kind of OCR system is useful for reading documents at close range (though the user may still need to take several photographs, waiting a few seconds each time to hear the results, before capturing one that is correctly centered), it is not intended for signs. Indeed, the KNFB manual (see knfbreader.com/upgrades_mobile.php) lists "posted signs such as signs on transit vehicles and signs in shop windows" under the "What the Reader Cannot Do" subsection. Signs provide valuable location-specific information for wayfinding, but they are usually viewed from a distance and are difficult or impossible to find without adequate vision and rapid feedback. We describe a prototype smartphone system that finds printed text in cluttered scenes, segments the text out of video images acquired by the smartphone for processing by OCR, and reads the recognized text aloud using TTS (text-to-speech). Because the system detects and reads text directly from video images, it provides real-time feedback (in contrast with systems such as the kReader Mobile) that helps the user find text with minimal prior knowledge of its location. We have also designed a novel audio-tactile user interface that helps the user hold the smartphone level, locate any text of interest, and approach it, if necessary, for a clearer image.
Preliminary experiments with two blind users demonstrate the feasibility of the approach, which is, to our knowledge, the first real-time sign-reading system expressly designed for blind and visually impaired users.
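To make the abstract's guidance loop concrete, the following is a minimal illustrative sketch (not the authors' implementation) of how per-frame text detections and a tilt sensor reading might be translated into the kind of audio-tactile feedback described: hold the phone level, then steer the user toward a detected text region until it is well framed for OCR and TTS. The `TextRegion` type, the `guidance` function, and all thresholds are hypothetical stand-ins; real text detection, OCR, and speech output are assumed to happen elsewhere in the pipeline.

```python
# Sketch of the per-frame feedback step in a sign-reading aid.
# Assumptions (not from the paper): normalized region coordinates,
# a tilt angle in degrees from the accelerometer, and string cues
# that would be spoken (or mapped to vibration patterns) by the UI.

from dataclasses import dataclass


@dataclass
class TextRegion:
    x: float      # region center, normalized to [0, 1] across the frame
    y: float      # region center, normalized to [0, 1] down the frame
    area: float   # fraction of the frame covered; larger means closer


def guidance(region: TextRegion, tilt_deg: float,
             level_tol: float = 5.0, min_area: float = 0.05) -> str:
    """Map one detected text region plus device tilt to a feedback cue."""
    if abs(tilt_deg) > level_tol:
        return "hold the phone level"      # tactile cue before anything else
    if region.area < min_area:
        return "text found: move closer"   # region too small for reliable OCR
    if region.x < 0.33:
        return "text to the left"
    if region.x > 0.67:
        return "text to the right"
    return "text centered: reading"        # frame ready to hand off to OCR/TTS
```

In a real system this function would run on every video frame, with the "reading" cue triggering segmentation of the region, OCR, and text-to-speech output; the point of the sketch is only the real-time feedback structure, which is what distinguishes this approach from photograph-and-wait readers like the kReader Mobile.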