Performance of optical flow techniques
International Journal of Computer Vision
Analyzing Facial Expressions for Virtual Conferencing
IEEE Computer Graphics and Applications
CVPR '96 Proceedings of the 1996 Conference on Computer Vision and Pattern Recognition
Elliptical Head Tracking Using Intensity Gradients and Color Histograms
CVPR '98 Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Deformable Model-Based Face Shape and Motion Estimation
FG '96 Proceedings of the 2nd International Conference on Automatic Face and Gesture Recognition
Model-Based Face Tracking for View-Independent Facial Expression Recognition
FGR '02 Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition
Model-Based Head Pose Tracking With Stereovision
FGR '02 Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition
Shape recognition with application to medical imaging
3D Tracking = Classification + Interpolation
ICCV '03 Proceedings of the Ninth IEEE International Conference on Computer Vision - Volume 2
An iterative image registration technique with an application to stereo vision
IJCAI'81 Proceedings of the 7th international joint conference on Artificial intelligence - Volume 2
Improved object segmentation based on 2D/3D images
SPPRA '08 Proceedings of the Fifth IASTED International Conference on Signal Processing, Pattern Recognition and Applications
A combined approach for estimating patchlets from PMD depth images and stereo intensity images
Proceedings of the 29th DAGM conference on Pattern recognition
Neurocomputing
This paper describes a head-tracking algorithm based on recognition and correlation-weighted interpolation. The input is a sequence of 3D depth images generated by a novel time-of-flight depth sensor. Each image is segmented into background and foreground, and the foreground is passed to the head-tracking algorithm, which comprises three major modules. First, a depth signature is created from the depth image. Next, the signature is compared against signatures collected from a training set of depth images. Finally, a correlation metric is computed for the most likely signature matches, and the head location is calculated by interpolating among the corresponding stored head positions, using the correlation metrics as weights. This combination of depth sensing and recognition-based head tracking achieves a success rate above 90 percent. Even if the track is temporarily lost, it is easily recovered as soon as a good match is found in the training set. The use of depth images and recognition-based tracking yields robust real-time results under extreme conditions such as 180-degree head rotation, temporary occlusions, and complex backgrounds.
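The recognition-plus-interpolation step can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the 1D signature layout, the normalized-correlation metric, and the function and parameter names (`head_location`, `k`) are all assumptions made for the example.

```python
import numpy as np

def head_location(query_sig, train_sigs, train_locs, k=5):
    """Estimate the head position as a correlation-weighted average
    of the stored positions of the k best-matching training signatures.

    query_sig  : 1D depth signature of the current frame
    train_sigs : list of 1D signatures from the training set
    train_locs : (N, 3) array of head positions stored with each signature
    """
    # Normalized cross-correlation between the query and each stored signature.
    q = (query_sig - query_sig.mean()) / (query_sig.std() + 1e-9)
    corrs = np.array([
        float(np.mean(q * (s - s.mean()) / (s.std() + 1e-9)))
        for s in train_sigs
    ])

    # Keep the k most likely matches; use their (non-negative) correlations
    # as interpolation weights.
    top = np.argsort(corrs)[-k:]
    w = np.clip(corrs[top], 0.0, None)
    if w.sum() == 0.0:
        # No positively correlated match: fall back to the single best hit.
        return train_locs[int(np.argmax(corrs))]
    return (w[:, None] * train_locs[top]).sum(axis=0) / w.sum()
```

Because the estimate is recomputed per frame from the training set, a temporarily lost track recovers automatically once a frame again produces a good signature match, which is the recovery behavior the abstract describes.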