A Visual Attention Based Approach to Text Extraction

  • Authors:
  • Qiaoyu Sun; Yue Lu; Shiliang Sun


  • Venue:
  • ICPR '10 Proceedings of the 2010 20th International Conference on Pattern Recognition
  • Year:
  • 2010

Abstract

A visual attention based approach is proposed to extract text from complicated backgrounds in camera-based images. First, a simplified visual attention model is applied to highlight the regions of interest (ROIs) in an input image and to yield a map, called the VA map, consisting of those ROIs. Second, an edge map of the image containing edge information in four directions is obtained with Sobel operators. Character areas are detected by connected component analysis and merged into candidate text regions. Finally, the VA map is employed to confirm the candidate text regions. Experimental results demonstrate that the proposed method effectively extracts text information and locates the text regions contained in camera-based images. It is robust not only to font, size, color, language, spacing, alignment, and background complexity, but also to perspective distortion and skewed text embedded in images.
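
The abstract describes a multi-stage pipeline: a saliency-based VA map, a four-direction Sobel edge map, connected component analysis to form candidate text regions, and VA-based confirmation. The following is a minimal illustrative sketch of such a pipeline in Python with OpenCV and NumPy, not the authors' implementation: the saliency step is a generic center-surround contrast stand-in for their simplified visual attention model, the merging of character areas into larger text regions is omitted, and the function names and thresholds (va_map, edge_map, min_area, saliency_thresh) are hypothetical choices made here for illustration.

```python
# Illustrative sketch of a VA-map + edge-map text extraction pipeline.
# Assumes OpenCV (cv2) and NumPy; saliency is a crude stand-in, not the
# paper's simplified visual attention model.
import cv2
import numpy as np

def va_map(image_bgr, blur_ksize=21):
    """Crude ROI (VA) map: per-pixel contrast of the Lab image
    against a heavily blurred version of itself, normalized to [0, 1]."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    blurred = cv2.GaussianBlur(lab, (blur_ksize, blur_ksize), 0)
    saliency = np.linalg.norm(lab - blurred, axis=2)
    return cv2.normalize(saliency, None, 0, 1, cv2.NORM_MINMAX)

def edge_map(gray):
    """Combine Sobel-style responses in four directions (0, 45, 90, 135 deg)."""
    k0 = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], np.float32)   # vertical edges
    k90 = k0.T                                                        # horizontal edges
    k45 = np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], np.float32)  # diagonal
    k135 = np.flipud(k45)                                             # anti-diagonal
    edges = sum(np.abs(cv2.filter2D(gray.astype(np.float32), -1, k))
                for k in (k0, k45, k90, k135))
    return cv2.normalize(edges, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def candidate_text_regions(image_bgr, min_area=50):
    """Binarize the edge map and take connected components as candidate
    character areas (merging into larger text regions is omitted here)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(edge_map(gray), 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    return [tuple(stats[i, :4]) for i in range(1, n)          # (x, y, w, h)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

def confirm_with_va(image_bgr, boxes, saliency_thresh=0.4):
    """Keep only candidates whose mean saliency in the VA map is high."""
    va = va_map(image_bgr)
    return [(x, y, w, h) for (x, y, w, h) in boxes
            if va[y:y + h, x:x + w].mean() >= saliency_thresh]
```

A typical usage under these assumptions would be `confirm_with_va(img, candidate_text_regions(img))` on a BGR image loaded with `cv2.imread`, yielding bounding boxes of text regions that are both edge-dense and salient.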