Pursuing atomic video words by information projection

  • Authors:
  • Youdong Zhao; Haifeng Gong; Yunde Jia

  • Affiliations:
  • School of Computer Science, Beijing Institute of Technology, Beijing, China; GRASP Lab, University of Pennsylvania, Philadelphia, PA, USA; School of Computer Science, Beijing Institute of Technology, Beijing, China

  • Venue:
  • ACCV'10: Proceedings of the 10th Asian Conference on Computer Vision - Volume Part II
  • Year:
  • 2010

Abstract

In this paper, we study mathematical models of atomic visual patterns in natural videos and establish a generative visual vocabulary for video representation. Empirically, we employ small video patches (e.g., 15×15×5, called video "bricks") from natural videos as the basic analysis units. The high-dimensional brick space contains a variety of brick subspaces (i.e., atomic video words) of varying dimensions, whose structures are characterized by both appearance and motion dynamics. Here, we categorize the words into two pure types: structural video words (SVWs) and textural video words (TVWs). A common generative model is introduced to describe both types of video words in a unified form. The representational power of a word is measured by its information gain, based on which words are pursued one by one via a novel pursuit algorithm until a holistic video vocabulary is built up. Experimental results demonstrate the potential of our framework for video representation.
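The abstract describes a greedy pursuit in which candidate words are ranked by information gain and added to the vocabulary one at a time. The sketch below is a minimal illustration of one plausible reading of that loop under a standard information projection setup, where the gain of a word is taken as the KL divergence between its response histogram and a background histogram. The function names (`information_gain`, `pursue_vocabulary`) and the histogram-based gain are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def information_gain(word_hist, background_hist, eps=1e-12):
    # KL divergence KL(p || q) between a word's normalized response
    # histogram p and the background histogram q. In information
    # projection frameworks, this KL term measures how much a word
    # adds over the current reference model. (Illustrative assumption;
    # the paper derives its own gain measure.)
    p = np.asarray(word_hist, dtype=float) + eps
    q = np.asarray(background_hist, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def pursue_vocabulary(candidates, background_hist, num_words):
    # Greedy pursuit: at each step, pick the remaining candidate word
    # with the largest information gain and add it to the vocabulary.
    # A faithful pursuit would also update the reference model after
    # each selection so later gains are conditional; that deflation
    # step is omitted here for brevity.
    pool = list(candidates)
    vocabulary = []
    for _ in range(min(num_words, len(pool))):
        gains = [information_gain(h, background_hist) for h in pool]
        best = int(np.argmax(gains))
        vocabulary.append((pool.pop(best), gains[best]))
    return vocabulary

# Toy usage with random response histograms standing in for real
# video-brick statistics.
rng = np.random.default_rng(0)
background = rng.random(32)
candidates = [rng.random(32) for _ in range(200)]
for i, (word, gain) in enumerate(pursue_vocabulary(candidates, background, 5)):
    print(f"word {i}: information gain = {gain:.4f}")
```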