Assistive text reading from complex background for blind persons

  • Authors:
  • Chucai Yi; Yingli Tian

  • Affiliations:
  • Media Lab, Dept. of Electrical Engineering, The City College of New York, City Univ. of New York, New York, NY, USA and Dept. of Computer Science, The Graduate Center, City Univ. of New York, New Yo ... (both authors)

  • Venue:
  • CBDAR'11 Proceedings of the 4th international conference on Camera-Based Document Analysis and Recognition
  • Year:
  • 2011


Abstract

In this paper, we propose a camera-based assistive system that enables visually impaired or blind persons to read text from signage and hand-held objects. The system localizes text in images with complex backgrounds and communicates the recognized content aurally. For text localization, we design a novel algorithm that learns gradient features of stroke orientations and distributions of edge pixels in an AdaBoost model. Text characters in the localized regions are then recognized by off-the-shelf optical character recognition (OCR) software and transformed into speech output. We evaluate the proposed system on the ICDAR 2003 Robust Reading Dataset; experimental results demonstrate that our algorithm outperforms previous algorithms on several measures. A prototype was further evaluated on a dataset collected by 10 blind persons, and it effectively read text from complex backgrounds.
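To illustrate the AdaBoost component of the localization stage, the following is a minimal, self-contained sketch (not the authors' implementation): an AdaBoost ensemble of one-dimensional decision stumps that votes "text region" vs. "background". The feature names (`edge_density`, `stroke_orientation_score`) are hypothetical stand-ins for the stroke-orientation and edge-distribution features described in the abstract.

```python
import math

def train_adaboost(samples, labels, rounds=5):
    """Train an AdaBoost ensemble of 1-D decision stumps.

    samples: list of feature vectors, e.g. (edge_density, stroke_orientation_score)
             -- hypothetical placeholders for the paper's gradient features
    labels:  +1 for "text region", -1 for "background"
    """
    n = len(samples)
    dims = len(samples[0])
    w = [1.0 / n] * n          # per-sample weights, updated each round
    ensemble = []              # (feature_index, threshold, polarity, alpha)
    for _ in range(rounds):
        best = None
        # exhaustively pick the stump with the lowest weighted error
        for d in range(dims):
            for thr in {x[d] for x in samples}:
                for pol in (1, -1):
                    err = sum(w[i] for i, x in enumerate(samples)
                              if (pol if x[d] >= thr else -pol) != labels[i])
                    if best is None or err < best[0]:
                        best = (err, d, thr, pol)
        err, d, thr, pol = best
        err = min(max(err, 1e-10), 1.0 - 1e-10)      # avoid log(0)
        alpha = 0.5 * math.log((1.0 - err) / err)    # stump's vote weight
        ensemble.append((d, thr, pol, alpha))
        # up-weight misclassified samples, then renormalize
        for i, x in enumerate(samples):
            pred = pol if x[d] >= thr else -pol
            w[i] *= math.exp(-alpha * labels[i] * pred)
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def classify(ensemble, x):
    """Weighted vote of the stumps: +1 = text region, -1 = background."""
    score = sum(alpha * (pol if x[d] >= thr else -pol)
                for d, thr, pol, alpha in ensemble)
    return 1 if score >= 0 else -1

# Toy usage with made-up feature values: text-like regions score high
# on both features, background regions score low.
train_x = [(0.9, 0.8), (0.8, 0.7), (0.2, 0.1), (0.1, 0.3)]
train_y = [1, 1, -1, -1]
ens = train_adaboost(train_x, train_y)
classify(ens, (0.85, 0.75))  # → 1 (text)
classify(ens, (0.15, 0.2))   # → -1 (background)
```

In the actual system, each candidate image region would first be converted into such a feature vector from its gradient and edge maps; the ensemble's positive outputs would then be passed to the OCR engine.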