Tracking a Person with 3-D Motion by Integrating Optical Flow and Depth

  • Authors:
  • R. Okada; Y. Shirai; J. Miura

  • Venue:
  • FG '00: Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition
  • Year:
  • 2000

Abstract

This paper describes a method for tracking a person undergoing 3-D translation and rotation by integrating optical flow and depth. The target region is first extracted based on the probability of each pixel belonging to the target person. The target state (3-D position, posture, and motion) is then estimated from the shape and position of the target region together with optical flow and depth. Multiple target states are maintained when the image measurements give rise to ambiguities about the target state. Experimental results on real image sequences show the effectiveness of our method.
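
The abstract outlines two ideas that lend themselves to a small illustration: scoring each pixel by how likely it is to belong to the target given flow and depth, and keeping several candidate target states when the measurements are ambiguous. The Python sketch below is not the authors' implementation; `target_probability`, `update_hypotheses`, `predict`, the Gaussian error model, and all parameter values are assumptions introduced here purely to illustrate the general structure.

```python
import numpy as np


def target_probability(depth, flow, pred_depth, pred_flow, sigma_d=0.15, sigma_f=1.5):
    """Per-pixel likelihood that a pixel belongs to the tracked person.

    Illustrative only: scores how well the observed depth and optical flow
    agree with the values predicted under one hypothesized target state,
    assuming independent Gaussian errors (sigma_d in metres, sigma_f in
    pixels per frame); the paper's actual model may differ.
    """
    p_depth = np.exp(-0.5 * ((depth - pred_depth) / sigma_d) ** 2)
    flow_err = np.linalg.norm(flow - pred_flow, axis=-1)
    p_flow = np.exp(-0.5 * (flow_err / sigma_f) ** 2)
    return p_depth * p_flow


def update_hypotheses(hypotheses, depth, flow, predict, keep=3, ambiguity=0.8):
    """Keep several candidate target states when measurements are ambiguous.

    `hypotheses` is a list of (state, weight) pairs and `predict(state)`
    returns the depth and flow expected under that state (a hypothetical
    interface, not the paper's). Instead of committing to the single best
    state, every hypothesis scoring within `ambiguity` of the best is kept,
    up to `keep` candidates.
    """
    scored = []
    for state, weight in hypotheses:
        pred_depth, pred_flow = predict(state)
        prob = target_probability(depth, flow, pred_depth, pred_flow)
        scored.append((state, weight * prob.mean()))
    scored.sort(key=lambda sw: sw[1], reverse=True)
    best = scored[0][1]
    kept = [(s, w) for s, w in scored if w >= ambiguity * best][:keep]
    total = sum(w for _, w in kept) or 1.0
    return [(s, w / total) for s, w in kept]


if __name__ == "__main__":
    # Tiny synthetic check: a flat surface about 2 m away with small flow.
    rng = np.random.default_rng(0)
    depth = 2.0 + 0.05 * rng.standard_normal((48, 64))
    flow = 0.5 * rng.standard_normal((48, 64, 2))

    def predict(state):
        # Hypothetical predictor: a state is just (expected depth, expected flow).
        z, (u, v) = state
        return np.full((48, 64), z), np.array([u, v])

    hypotheses = [((2.0, (0.0, 0.0)), 0.5), ((2.5, (3.0, 0.0)), 0.5)]
    print(update_hypotheses(hypotheses, depth, flow, predict))
```

In this toy run the hypothesis consistent with the synthetic depth and flow receives nearly all of the weight, while the mechanism would retain both candidates whenever their scores are close, mirroring the multiple-hypothesis idea described in the abstract.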