Hierarchical space-time model enabling efficient search for human actions

  • Authors:
  • Huazhong Ning; Tony X. Han; Dirk B. Walther; Ming Liu; Thomas S. Huang

  • Affiliations:
  • Electrical and Computer Engineering Department and Beckman Institute for Advanced Science and Technology, University of Illinois, Urbana, IL; Electrical and Computer Engineering Department, University of Missouri-Columbia, Columbia, MO; Beckman Institute for Advanced Science and Technology, University of Illinois, Urbana, IL; Electrical and Computer Engineering Department and Beckman Institute for Advanced Science and Technology, University of Illinois, Urbana, IL; Electrical and Computer Engineering Department and Beckman Institute for Advanced Science and Technology, University of Illinois, Urbana, IL

  • Venue:
  • IEEE Transactions on Circuits and Systems for Video Technology
  • Year:
  • 2009

Abstract

We propose a five-layer hierarchical space-time model (HSTM) for representing and searching human actions in videos. From a feature design point of view, both invariance and selectivity are desirable characteristics, yet they seem to contradict each other. To reconcile them, we introduce a coarse-to-fine search and verification scheme for action search based on the HSTM. Because traversing the layers of the hierarchy corresponds to progressively trading invariance against selectivity, this strategy enables searching for human actions ranging from the rapid movements of sports to the subtle motions of facial expressions. The introduction of the Histogram of Gabor Orientations feature allows the search to proceed smoothly across the hierarchical layers of the HSTM. Efficient matching is achieved by using integral histograms to compute the features in the top two layers. The HSTM was evaluated on three challenging video sequences and on the KTH human action database, where it achieved improvements over other state-of-the-art algorithms. These promising results validate that the HSTM is both selective and robust for searching human actions.
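The abstract attributes the efficient matching in the top two layers to integral histograms, which allow the histogram of any rectangular region to be read off in constant time per bin once a cumulative table has been built. The sketch below is a minimal illustration of that general technique in NumPy, not the authors' implementation; the 8-bin gradient-orientation quantization, array shapes, and function names are illustrative assumptions.

```python
import numpy as np

def build_integral_histogram(bin_map, n_bins):
    """bin_map: (H, W) array of per-pixel bin indices (e.g. quantized
    orientations). Returns an (H+1, W+1, n_bins) cumulative table so the
    histogram of any rectangle needs only four lookups per bin."""
    H, W = bin_map.shape
    ih = np.zeros((H + 1, W + 1, n_bins), dtype=np.int64)
    one_hot = (bin_map[..., None] == np.arange(n_bins)).astype(np.int64)
    ih[1:, 1:, :] = one_hot.cumsum(axis=0).cumsum(axis=1)
    return ih

def region_histogram(ih, top, left, bottom, right):
    """Histogram of pixels in rows [top, bottom) and cols [left, right)."""
    return (ih[bottom, right] - ih[top, right]
            - ih[bottom, left] + ih[top, left])

# Example: quantize gradient orientations of a synthetic frame into 8 bins,
# then query the histogram of a 40x40 patch without rescanning its pixels.
frame = np.random.rand(120, 160).astype(np.float32)
gy, gx = np.gradient(frame)
orient = np.arctan2(gy, gx)                            # in [-pi, pi]
bins = ((orient + np.pi) / (2 * np.pi) * 8).astype(int) % 8
ih = build_integral_histogram(bins, n_bins=8)
print(region_histogram(ih, 30, 50, 70, 90))            # 8-bin patch histogram
```

The same table can be reused for every candidate window in a sliding search, which is what makes this kind of precomputation attractive for coarse-to-fine matching schemes such as the one described in the abstract.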