Boosted Exemplar Learning for Action Recognition and Annotation

  • Authors:
  • Tianzhu Zhang; Jing Liu; Si Liu; Changsheng Xu; Hanqing Lu

  • Affiliations:
  • Nat. Lab. of Pattern Recognition, Chinese Acad. of Sci., Beijing, China

  • Venue:
  • IEEE Transactions on Circuits and Systems for Video Technology
  • Year:
  • 2011

Abstract

Human action recognition and annotation is an active research topic in computer vision. Modeling diverse actions, which vary in temporal resolution, visual appearance, and other factors, is a challenging task. In this paper, we propose a boosted exemplar learning (BEL) approach that models actions in a weakly supervised manner, i.e., only bag-level action labels are provided, not instance-level ones. The proposed BEL method can be summarized in three steps. First, for each action category, a set of class-specific candidate exemplars is learned through an optimization formulation that accounts for their discrimination and co-occurrence. Second, each action bag is described as a set of similarities between its instances and the candidate exemplars. Instead of using a heuristic distance measure, the similarities are determined by exemplar-based classifiers trained through multiple instance learning, in which a positive (or negative) video or image set is treated as a positive (or negative) action bag, and the frames similar to a given exemplar in Euclidean space are treated as action instances. Third, we formulate the selection of the most discriminative exemplars as a boosted feature selection problem and simultaneously obtain a bag-level action detector. Experimental results on two publicly available datasets, the KTH and Weizmann datasets, demonstrate the validity and effectiveness of the proposed approach for action recognition. We also apply BEL to learn action representations from images collected from the Web and use this knowledge to automatically annotate actions in YouTube videos. The results show that the proposed algorithm is also practical in unconstrained environments.
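
To make the bag-of-similarities representation and the boosted exemplar selection described in the abstract more concrete, the sketch below runs the pipeline on toy data. It is not the authors' implementation: the feature dimensionality, the RBF max-similarity score used as a stand-in for the MIL-trained exemplar-based classifiers, and the use of scikit-learn's AdaBoostClassifier with decision stumps are all illustrative assumptions.

```python
# Minimal sketch of BEL's bag-of-similarities + boosted exemplar selection.
# All names, sizes, and the max-similarity stand-in for the paper's
# MIL-learned exemplar classifiers are illustrative assumptions.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

def bag_similarity_features(bag, exemplars, sigma=1.0):
    """Describe one action bag (frame descriptors, shape [n_frames, d]) by its
    similarity to each candidate exemplar. Here the per-exemplar score is the
    best (max) RBF similarity over the bag's instances -- a simplification of
    the exemplar-based classifiers learned via multiple instance learning."""
    dists = np.linalg.norm(bag[:, None, :] - exemplars[None, :, :], axis=-1)
    sims = np.exp(-(dists ** 2) / (2 * sigma ** 2))
    return sims.max(axis=0)  # one similarity value per exemplar

# Toy data: each "video" is a bag of 20 frame descriptors in R^16,
# with only a bag-level label (weak supervision).
n_bags, n_frames, d, n_exemplars = 60, 20, 16, 30
exemplars = rng.normal(size=(n_exemplars, d))          # candidate exemplars
bags = [rng.normal(size=(n_frames, d)) for _ in range(n_bags)]
labels = rng.integers(0, 2, size=n_bags)               # bag-level labels only

# Step 2: map each bag to a vector of exemplar similarities.
X = np.stack([bag_similarity_features(b, exemplars) for b in bags])

# Step 3: boosted feature selection. Each boosting round fits a decision
# stump on one similarity dimension, i.e. selects one exemplar, and the
# resulting ensemble doubles as the bag-level action detector.
detector = AdaBoostClassifier(n_estimators=10, random_state=0).fit(X, labels)
selected_exemplars = np.flatnonzero(detector.feature_importances_ > 0)
print("exemplars selected by boosting:", selected_exemplars)
print("bag-level detector training accuracy:", detector.score(X, labels))
```

On real data the candidate exemplars would come from the class-specific optimization in step one and the similarities from the MIL-trained exemplar classifiers, but the structure of the representation and the boosting-based selection is the same as sketched here.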