An overview of contest on semantic description of human activities (SDHA) 2010

  • Authors:
  • M. S. Ryoo; Chia-Chih Chen; J. K. Aggarwal; Amit Roy-Chowdhury

  • Affiliations:
  • Computer and Vision Research Center, The University of Texas at Austin, and Robot/Cognition Research Department, ETRI, Korea; Computer and Vision Research Center, The University of Texas at Austin; Computer and Vision Research Center, The University of Texas at Austin; Dept. of EE, University of California, Riverside

  • Venue:
  • ICPR '10: Proceedings of the 20th International Conference on Recognizing Patterns in Signals, Speech, Images, and Videos
  • Year:
  • 2010

Abstract

This paper summarizes the results of the 1st Contest on Semantic Description of Human Activities (SDHA), held in conjunction with ICPR 2010. SDHA 2010 consists of three challenges: the High-level Human Interaction Recognition Challenge, the Aerial View Activity Classification Challenge, and the Wide-Area Activity Search and Recognition Challenge. The challenges are designed to encourage participants to test existing methodologies and to develop new approaches for complex human activity recognition scenarios in realistic environments. We introduce three new public datasets through these challenges, and discuss the results of the state-of-the-art activity recognition systems designed and implemented by the contestants. A methodology using spatio-temporal voting [19] successfully classified segmented videos in the UT-Interaction datasets, but had difficulty correctly localizing activities in continuous videos. Both the method using local features [10] and the HMM-based method [18] successfully recognized actions from low-resolution videos (i.e., the UT-Tower dataset). We compare their results in this paper.
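As background for the HMM-based approach mentioned above, the sketch below shows the generic recognition pattern it belongs to: fit one hidden Markov model per action class on per-frame feature sequences, then label a test sequence with the class whose model assigns the highest log-likelihood. This is a minimal illustration under stated assumptions, not the contestants' system [18]: the hmmlearn library, the 10-dimensional features, the class names, and the synthetic data are all placeholders.

```python
# Illustrative sketch only: one Gaussian HMM per action class, with
# classification by maximum log-likelihood. Feature extraction is
# replaced by synthetic per-frame vectors to keep the example
# self-contained; class names and dimensions are assumptions.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

def train_class_hmm(sequences, n_states=3):
    """Fit one HMM on all training sequences of a single action class."""
    X = np.concatenate(sequences)            # frames stacked row-wise
    lengths = [len(s) for s in sequences]    # per-sequence frame counts
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50, random_state=0)
    model.fit(X, lengths)
    return model

# Placeholder training data: 10-dim per-frame features, two toy classes.
train = {
    "walking": [rng.normal(0.0, 1.0, size=(40, 10)) for _ in range(5)],
    "digging": [rng.normal(2.0, 1.0, size=(40, 10)) for _ in range(5)],
}
models = {label: train_class_hmm(seqs) for label, seqs in train.items()}

def classify(sequence):
    """Return the class whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(sequence))

test_seq = rng.normal(2.0, 1.0, size=(40, 10))  # resembles "digging"
print(classify(test_seq))
```

For the actual UT-Tower videos, the synthetic vectors would be replaced by descriptors extracted from the low-resolution frames; the train-one-model-per-class, score-all-models structure is the standard form of maximum-likelihood HMM classification.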