Text locating in scene images for reading and navigation aids for visually impaired persons

  • Authors: Chucai Yi
  • Affiliations: City University of New York, New York, NY, USA
  • Venue: Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility
  • Year: 2010

Abstract

Many reading assistants and navigation systems have been designed specifically for people who are blind or visually impaired, but locating text in scene images with complex backgrounds has not yet been successfully addressed. In this paper, we propose a novel method to locate scene text by combining color uniformity and high edge density. We perform structural analysis of text strings, which contain several characters in alignment. First, we compute the edge image and repaint the corresponding edge pixels in the original image with a non-dominant color. Second, color reduction is performed using color histogram and K-means algorithms to segment the repainted image into color layers. Third, we perform edge detection in each color layer and label the boundaries of both text characters and unexpected noise. Each boundary centroid is assigned a degree, defined as the number of color layers in which a boundary overlaps at the same position. Fourth, text line fitting is performed among centroids with high degree to chain together the character boundaries that belong to the same text string. Each detected text string is represented by a rectangular region covering all character boundaries in its text line. Experimental results demonstrate that our algorithm is able to locate text strings with arbitrary orientations, and its performance is comparable with state-of-the-art algorithms.
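To make the color-reduction and boundary-labeling steps described above more concrete, the sketch below quantizes pixel colors with K-means into color layers and extracts connected-component boundaries with their centroids from each layer. It is a minimal illustration using OpenCV, not the authors' implementation; the cluster count, area threshold, and file names are assumptions.

```python
# Illustrative sketch of color reduction into layers (K-means) followed by
# candidate character-boundary extraction per layer. Not the paper's code.
import cv2
import numpy as np

def color_layers(image_bgr, k=4):
    """Quantize pixel colors with K-means and return one binary mask per color layer."""
    pixels = image_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(pixels, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    labels = labels.reshape(image_bgr.shape[:2])
    return [(labels == i).astype(np.uint8) * 255 for i in range(k)]

def candidate_boundaries(layer_mask, min_area=50):
    """Label connected components in a color layer; return bounding boxes and centroids."""
    n, _, stats, centroids = cv2.connectedComponentsWithStats(layer_mask)
    boxes = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:  # discard tiny noise components
            boxes.append(((x, y, w, h), tuple(centroids[i])))
    return boxes

if __name__ == "__main__":
    img = cv2.imread("scene.jpg")  # hypothetical input scene image
    for layer in color_layers(img, k=4):
        for (x, y, w, h), c in candidate_boundaries(layer):
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)
    cv2.imwrite("candidates.jpg", img)
```

In the paper's pipeline, the centroids collected across layers would then be scored by their overlap degree and grouped by text line fitting; the sketch stops at the candidate-boundary stage.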