Video retrieval of human interactions using model-based motion tracking and multi-layer finite state automata

  • Authors:
  • Sangho Park (Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX)
  • Jihun Park (Department of Computer Engineering, Hongik University, Seoul, Korea)
  • Jake K. Aggarwal (Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX)

  • Venue:
  • CIVR '03: Proceedings of the 2nd International Conference on Image and Video Retrieval
  • Year:
  • 2003

Abstract

Recognition of human interactions in video is useful for video annotation, automated surveillance, and content-based video retrieval. This paper presents a model-based approach to motion tracking and recognition of human interactions using multi-layer finite state automata (FA). The system is designed for widely available, static-background, monocular surveillance video. A three-dimensional human body model is built from a sphere and cylinders and is projected onto the two-dimensional image plane to fit the foreground image silhouette. We cast human motion tracking as a parameter optimization problem, avoiding the need to compute inverse kinematics. A cost functional measures the degree of overlap between the foreground silhouette of the input image and the silhouette of the projected three-dimensional body model. A behavior recognition system then analyzes the motion data from the tracker in terms of the feet, torso, and hands. The recognition model represents human behavior as a sequence of states that register the configuration of individual body parts in space and time. To overcome the exponential growth in the number of states that typically occurs in a single-level FA, we propose a multi-layer FA that abstracts states and events from the motion data at multiple levels: low-level FAs analyze the individual body parts, and a high-level FA analyzes the interaction between people. Motion tracking results from video sequences are presented. The framework successfully recognizes interactions such as approaching, departing, pushing, pointing, and handshaking.
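
The tracking step described in the abstract reduces to minimizing a silhouette-disagreement cost over the model's pose parameters. Below is a minimal Python sketch of that idea, assuming a toy two-primitive "body" (a disk for the head, a rectangle for the torso) and hypothetical parameter names; the paper's actual model uses a sphere and cylinders with far more degrees of freedom, and the abstract does not commit to a particular optimizer.

```python
import numpy as np
from scipy.optimize import minimize

def render_silhouette(params, shape=(120, 80)):
    """Toy stand-in for projecting the sphere-and-cylinder body model:
    a disk (head) above a rectangle (torso), parameterized by image
    position (cx, cy) and scale s. Hypothetical parameterization."""
    cx, cy, s = params
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    head = (xs - cx) ** 2 + (ys - (cy - 3.0 * s)) ** 2 <= s ** 2
    torso = (np.abs(xs - cx) <= 1.5 * s) & (np.abs(ys - cy) <= 2.0 * s)
    return head | torso

def overlap_cost(params, observed):
    """Cost functional: count pixels where the projected model silhouette
    and the observed foreground silhouette disagree (symmetric
    difference). Tracking minimizes this directly, so no inverse
    kinematics is needed."""
    model = render_silhouette(params, observed.shape)
    return float(np.logical_xor(model, observed).sum())

# One tracking step: start from the previous frame's pose estimate and
# refine it. The rasterized cost is piecewise constant, so a
# derivative-free search is a natural choice.
observed = render_silhouette((40.0, 60.0, 8.0))   # synthetic "input frame"
prev_pose = np.array([37.0, 57.0, 7.0])           # estimate from frame t-1
result = minimize(overlap_cost, prev_pose, args=(observed,),
                  method="Nelder-Mead")
print("recovered pose:", result.x)
```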
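
The multi-layer FA can be sketched similarly: low-level automata track per-body-part states, and their outputs are re-emitted as events to a high-level automaton, so no single layer needs a state for every combination of body-part configurations. The state and event names below are hypothetical illustrations, not the paper's vocabulary.

```python
class FA:
    """Minimal deterministic finite automaton: transitions maps
    (state, event) -> next state; unknown events leave the state as-is."""
    def __init__(self, start, transitions, accepting):
        self.state = start
        self.transitions = transitions
        self.accepting = accepting

    def step(self, event):
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

    def accepts(self):
        return self.state in self.accepting

# Low-level FA: abstracts raw hand-position data into part-level states.
hand_fa = FA("retracted",
             {("retracted", "hand_moves_out"): "extended",
              ("extended", "hand_moves_in"): "retracted"},
             accepting={"extended"})

# High-level FA: consumes part-level states as events and recognizes the
# interaction, here a drastically simplified "handshake".
interaction_fa = FA("apart",
                    {("apart", "torsos_close"): "facing",
                     ("facing", "both_hands_extended"): "handshake"},
                    accepting={"handshake"})

# Driving the layers: each layer stays small because it only sees the
# abstracted output of the layer below it.
hand_fa.step("hand_moves_out")
interaction_fa.step("torsos_close")
if hand_fa.accepts():            # both persons' hand FAs, in practice
    interaction_fa.step("both_hands_extended")
print(interaction_fa.state)      # -> "handshake"
```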