Detection of text on road signs from video

  • Authors:
  • Wen Wu, Xilin Chen, Jie Yang

  • Affiliations:
  • School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA

  • Venue:
  • IEEE Transactions on Intelligent Transportation Systems
  • Year:
  • 2005

Abstract

A fast and robust framework for incrementally detecting text on road signs from video is presented in this paper. The framework makes two main contributions. 1) It applies a divide-and-conquer strategy to decompose the original task into two subtasks: the localization of road signs and the detection of text on the signs. The algorithms for the two subtasks are naturally incorporated into a unified framework through a feature-based tracking algorithm. 2) It provides a novel way to detect text from video by integrating two-dimensional (2-D) image features in each video frame (e.g., color, edges, texture) with the three-dimensional (3-D) geometric structure information of objects extracted from the video sequence (such as the vertical-plane property of road signs). The feasibility of the proposed framework has been evaluated on 22 video sequences captured from a moving vehicle. The framework achieves an overall text detection rate of 88.9% and a false hit rate of 9.2%. It can easily be applied to other text-detection-from-video tasks and could potentially be embedded in a driver assistance system.
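The divide-and-conquer pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the threshold values, the region/track data layout, and the plane-normal test are all assumptions made for the sketch; the paper's actual feature-based tracker and 2-D/3-D feature extraction are far more involved.

```python
def localize_sign_candidates(frame):
    """Subtask 1: find candidate road-sign regions in one frame using
    2-D image cues (color, edges, texture). Here a stub that keeps
    regions whose precomputed scores pass fixed, illustrative thresholds."""
    return [r for r in frame["regions"]
            if r["color_score"] > 0.5 and r["edge_score"] > 0.5]

def is_vertical_plane(track):
    """3-D cue: road signs lie on (near-)vertical planes. Stub check on a
    hypothetical plane-normal estimate recovered from the tracked motion."""
    return abs(track["plane_normal_z"]) < 0.2

def detect_text(region):
    """Subtask 2: detect text only inside confirmed sign regions."""
    return region.get("text_boxes", [])

def detect_text_on_signs(video):
    """Incremental detection: candidate regions are tracked across frames
    (stubbed here as keeping the latest observation per region id), and
    text detection runs only on regions consistent with a vertical plane,
    pruning false hits from background clutter."""
    tracks = {}   # region id -> latest tracked observation
    results = []
    for frame in video:
        for region in localize_sign_candidates(frame):
            tracks[region["id"]] = region
    for track in tracks.values():
        if is_vertical_plane(track):
            results.extend(detect_text(track))
    return results
```

For example, a region with strong color/edge scores on a near-vertical plane yields its text boxes, while a cluttered background region failing either test is discarded.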