Pipeline-Architecture Based Real-Time Active-Vision for Human-Action Recognition

  • Authors:
  • Matthew Mackay; Robert G. Fenton; Beno Benhabib

  • Affiliations:
  • Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Canada M5S 3G8 (all authors)

  • Venue:
  • Journal of Intelligent and Robotic Systems
  • Year:
  • 2013

Abstract

This paper presents a generic framework for on-line reconfiguration of a multi-camera active-vision system for action recognition of time-varying-geometry objects/subjects. The proposed methodology uses a customizable pipeline architecture to select optimal camera poses in real time, with subject visibility optimized via a depth-limited search algorithm. All pipeline stages are developed with real-time operation as the central focus. A human action-sensing implementation example demonstrates the framework's viability. Controlled experiments, first with a human analogue and subsequently with a real human subject, illustrate the workings of the proposed framework and show a tangible increase in action-recognition success rate over competing strategies, particularly those using static cameras, while maintaining real-time operation. Further experiments examine how scaling the number of obstacles, cameras, and library actions, as well as sensing-system mobility, affects real-time performance.
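
The abstract mentions that camera poses are selected by a depth-limited search that maximizes subject visibility. The sketch below is only an illustration of that general idea, not the authors' implementation: the pose representation (pan/tilt angles), the discrete move set, the two-camera setup, and the visibility() scoring function are all assumptions introduced here; a real system would evaluate visibility by ray-casting against obstacles and the subject's time-varying geometry.

```python
# Illustrative sketch: depth-limited search over discrete camera-pose
# adjustments, maximizing a placeholder visibility score.
from itertools import product

# Candidate per-step adjustments for one camera: (delta_pan_deg, delta_tilt_deg).
# The move set is a hypothetical discretization, not taken from the paper.
MOVES = [(-10, 0), (10, 0), (0, -10), (0, 10), (0, 0)]

def visibility(poses, subject):
    """Placeholder score: higher when each camera points near the predicted
    subject direction. Stands in for a true occlusion-aware visibility metric."""
    return -sum(abs(p - subject[0]) + abs(t - subject[1]) for p, t in poses)

def depth_limited_search(poses, subject, depth):
    """Return (best_score, first_moves) reachable within 'depth' steps.
    Each level applies one adjustment to every camera simultaneously."""
    if depth == 0:
        return visibility(poses, subject), None
    best_score, best_moves = float("-inf"), None
    for moves in product(MOVES, repeat=len(poses)):
        next_poses = [(p + dp, t + dt) for (p, t), (dp, dt) in zip(poses, moves)]
        score, _ = depth_limited_search(next_poses, subject, depth - 1)
        if score > best_score:
            best_score, best_moves = score, moves
    return best_score, best_moves

if __name__ == "__main__":
    cameras = [(0, 0), (45, 0)]      # current (pan, tilt) of two cameras (example values)
    predicted_subject = (20, 5)      # predicted subject direction (example values)
    score, first_moves = depth_limited_search(cameras, predicted_subject, depth=2)
    print("best score:", score, "next adjustments:", first_moves)
```

In this toy form, the search depth bounds how far ahead the reconfiguration looks, trading solution quality against the per-frame computation budget, which is consistent with the paper's emphasis on real-time operation.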