A new approach for overlay text detection and extraction from complex video scene

  • Authors:
  • Wonjun Kim; Changick Kim

  • Affiliations:
  • Department of Electronic Engineering, Information and Communications University, Daejeon, Korea (both authors)

  • Venue:
  • IEEE Transactions on Image Processing
  • Year:
  • 2009


Abstract

Overlay text provides important semantic clues for video content analysis tasks such as video information retrieval and summarization, since the content of the scene or the editor's intention can be well represented by the inserted text. Most previous approaches to extracting overlay text from videos are based on low-level features such as edge, color, and texture information. However, existing methods have difficulty handling text with varying contrast or text inserted into a complex background. In this paper, we propose a novel framework to detect and extract overlay text from the video scene. Based on our observation that transient colors exist between inserted text and its adjacent background, a transition map is first generated. Candidate regions are then extracted by a reshaping method, and the overlay text regions are determined based on the occurrence of overlay text in each candidate. The detected overlay text regions are localized accurately using the projection of overlay text pixels in the transition map, and text extraction is finally performed. The proposed method is robust to different character sizes, positions, contrasts, and colors, and it is language-independent. Overlay text region update between frames is also employed to reduce processing time. Experiments are performed on diverse videos to confirm the efficiency of the proposed method.
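To illustrate the pipeline described in the abstract, the sketch below shows a heavily simplified version of two of its stages: generating a transition map and localizing a text region by projecting transition pixels onto the image axes. The transition measure here is just a thresholded horizontal intensity difference; the paper's actual transition map uses a more elaborate color-change model, so the threshold, the `min_ratio` parameter, and both function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def transition_map(gray, thr=40):
    """Mark pixels where the horizontal intensity change between
    neighbors exceeds thr. This is a stand-in for the paper's
    transient-color-based transition measure."""
    diff = np.abs(np.diff(gray.astype(np.int32), axis=1))
    tmap = np.zeros(gray.shape, dtype=bool)
    tmap[:, 1:] = diff > thr
    return tmap

def localize_text(tmap, min_ratio=0.1):
    """Project transition pixels onto rows and columns, then keep the
    span where the projection exceeds a fraction of its maximum.
    Returns ((row_start, row_end), (col_start, col_end))."""
    rows = tmap.sum(axis=1)
    cols = tmap.sum(axis=0)

    def span(proj):
        keep = np.where(proj > min_ratio * proj.max())[0]
        return (int(keep[0]), int(keep[-1])) if keep.size else (0, 0)

    return span(rows), span(cols)

# Synthetic example: a bright rectangle (stand-in for overlay text)
# on a uniform background produces transitions at its left/right edges.
img = np.full((40, 50), 100, dtype=np.uint8)
img[10:21, 5:31] = 200
(r0, r1), (c0, c1) = localize_text(transition_map(img))
```

In this toy example the projections recover the rectangle's vertical extent (rows 10 to 20) and its edge columns; a real implementation would apply the paper's reshaping step and per-candidate verification before declaring a region overlay text.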