Implementation of a Modular Real-Time Feature-Based Architecture Applied to Visual Face Tracking

  • Authors:
  • Benjamin Castaneda; Yuriy Luzanov; Juan C. Cockburn

  • Affiliations:
  • Rochester Institute of Technology, New York (all authors)

  • Venue:
  • ICPR '04: Proceedings of the 17th International Conference on Pattern Recognition, Volume 4
  • Year:
  • 2004


Abstract

This paper presents a modular real-time feature-based visual tracking architecture in which each feature of an object is tracked by one module. A data fusion stage collects the information from the modules, exploiting the relationships among features to achieve robust detection and visual tracking. The architecture takes advantage of the temporal and spatial information available in a video stream. Its effectiveness is demonstrated in a face tracking system that uses the eyes and lips as features. In the implementation, each module has a pre-processing stage that reduces the number of image regions that are candidates for eyes and lips. Support Vector Machines are then used for classification, while a combination of Kalman filters and template matching is used for tracking. In the data fusion stage, the geometric relation between features is used to combine the information from the different modules and improve tracking.
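The abstract's pipeline (one tracking module per feature, Kalman filtering, and a fusion stage that exploits inter-feature geometry) can be illustrated with a minimal sketch. This is not the authors' implementation: the class names, the simplified diagonal-covariance Kalman filter, and the specific geometric rule used in the fusion stage are all illustrative assumptions.

```python
class KalmanTracker2D:
    """One per-feature module: a constant-velocity Kalman filter for an
    (x, y) feature position. Covariance is kept diagonal for brevity."""

    def __init__(self, x, y, q=1e-2, r=1.0):
        self.state = [x, y, 0.0, 0.0]   # x, y, vx, vy
        self.p = [1.0] * 4              # diagonal covariance (simplified)
        self.q, self.r = q, r           # process / measurement noise

    def predict(self, dt=1.0):
        # Propagate position by velocity; inflate uncertainty.
        self.state[0] += self.state[2] * dt
        self.state[1] += self.state[3] * dt
        self.p = [p + self.q for p in self.p]
        return self.state[0], self.state[1]

    def update(self, zx, zy):
        # Blend the measurement (e.g. from SVM detection or template
        # matching) into the state using the Kalman gain.
        for i, z in ((0, zx), (1, zy)):
            k = self.p[i] / (self.p[i] + self.r)
            innovation = z - self.state[i]
            self.state[i] += k * innovation
            self.state[i + 2] += 0.5 * k * innovation  # crude velocity update
            self.p[i] *= (1.0 - k)
        return self.state[0], self.state[1]


def fuse(left_eye, right_eye, lips):
    """Data-fusion stage sketch: use the geometric relation among features.
    Here (an assumed rule, not the paper's) the lip estimate is pulled
    toward the point below the eye midpoint, at a distance proportional
    to the inter-eye distance, making the lip module's output more robust."""
    mid_x = 0.5 * (left_eye[0] + right_eye[0])
    mid_y = 0.5 * (left_eye[1] + right_eye[1])
    eye_dist = abs(right_eye[0] - left_eye[0])
    expected_lips = (mid_x, mid_y + eye_dist)
    return (0.5 * (lips[0] + expected_lips[0]),
            0.5 * (lips[1] + expected_lips[1]))
```

A frame loop would call `predict()` on each module, feed each module's detection into `update()`, and pass the three estimates through `fuse()` before reporting the face pose.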