This paper presents a novel face tracker and verifies its effectiveness for analyzing group meetings. In meeting scene analysis, face direction is an important clue for assessing the visual attention of meeting participants. The face tracker, called STCTracker (Sparse Template Condensation Tracker), estimates face position and pose by matching face templates within a particle filter framework. STCTracker is robust against large head rotation, up to ±60 degrees in the horizontal direction, with relatively small mean deviation error. By exploiting a modern GPU (Graphics Processing Unit), it can track multiple faces simultaneously in real time, e.g. 6 faces at about 28 frames/second on a single PC, and it automatically builds 3-D face templates when the tracker is initialized. This paper evaluates the tracking errors and verifies the effectiveness of STCTracker for meeting scene analysis, in terms of conversation structures, gaze directions, and the structure of cross-modal interactions involving head gestures and utterances. Experiments confirm that STCTracker can essentially match the performance of a user-unfriendly magnetic-sensor-based motion capture system.
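To make the Condensation (particle filter) idea behind the tracker concrete, here is a minimal, illustrative sketch of one resample-diffuse-reweight cycle over pose hypotheses. It is not the paper's implementation: the `observe` likelihood below is a toy Gaussian score standing in for STCTracker's actual sparse-template matching against the image, and the pose state and motion model are simplified assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def track_step(particles, weights, observe, motion_std=0.3):
    """One Condensation update: resample, diffuse, re-weight.

    particles: (N, D) array of pose hypotheses (here x, y, yaw).
    observe:   function scoring how well a hypothesized pose matches
               the current frame (a stand-in for template matching).
    """
    n = len(particles)
    # Resample particles in proportion to their previous weights.
    idx = rng.choice(n, size=n, p=weights)
    particles = particles[idx]
    # Diffuse with Gaussian noise (simple random-walk dynamics).
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Re-weight each hypothesis by the matching likelihood.
    w = np.array([observe(p) for p in particles])
    w = w / w.sum()
    return particles, w

# Toy demo: the "true" pose is fixed, and the likelihood is a Gaussian
# around it, standing in for a real image-matching score.
true_pose = np.array([10.0, 5.0, 0.3])
observe = lambda p: np.exp(-0.5 * np.sum((p - true_pose) ** 2))

n = 500
particles = rng.normal(0.0, 5.0, size=(n, 3))
weights = np.full(n, 1.0 / n)
for _ in range(30):
    particles, weights = track_step(particles, weights, observe)

estimate = weights @ particles  # weighted-mean pose estimate
```

After a few iterations the weighted mean converges near the true pose; in the real tracker each particle would instead be scored by projecting a sparse 3-D template at the hypothesized pose and comparing it with the image, and the GPU evaluates the many hypotheses in parallel.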