A central challenge in Augmented Reality (AR) applications is to accurately track objects in video, following their movement, orientation, size, and position. Recovering non-rigid shape and global pose in real time is particularly demanding. This paper proposes a novel two-stage scheme for online non-rigid shape recovery in AR applications using Active Appearance Models (AAMs). First, we construct 3D shape models from AAMs offline, without processing any 3D scan data. Then, based on the computed 3D shape models, we propose an efficient online algorithm that estimates both 3D pose and non-rigid shape parameters via local bundle adjustment, building up point correspondences in the process. Our approach, without manual intervention, recovers 3D non-rigid shape effectively from either real-time video sequences or a single image. The recovered 3D pose parameters can be used for AR registration. Furthermore, facial features can be tracked simultaneously, which is critical for many face-related applications. We evaluate our algorithms on several video sequences; promising experimental results demonstrate that the proposed scheme is effective for real-time AR applications.
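The online stage described above, which alternates between estimating a rigid pose and non-rigid shape coefficients from 2D point correspondences, can be sketched as follows. This is a hypothetical minimal illustration, not the paper's actual algorithm: it assumes a linear 3D shape model (mean shape plus deformation basis, as an AAM-derived model provides), a scaled-orthographic projection fitted by least squares in place of full local bundle adjustment, and hypothetical names (`fit_shape_and_pose`, `mean_shape`, `basis`).

```python
import numpy as np

def fit_shape_and_pose(pts2d, mean_shape, basis, n_iters=20):
    """Alternately estimate a linear 2D projection (pose) and non-rigid
    shape coefficients so the projected 3D model matches the 2D landmarks.

    pts2d:      (n, 2) observed 2D landmark positions
    mean_shape: (n, 3) mean 3D shape
    basis:      (k, n, 3) linear deformation modes
    Returns (coeffs, P): shape coefficients (k,) and projection (4, 2).
    """
    n = mean_shape.shape[0]          # number of landmarks
    k = basis.shape[0]               # number of deformation modes
    coeffs = np.zeros(k)
    P = np.zeros((4, 2))
    for _ in range(n_iters):
        # Current deformed 3D shape: mean + linear combination of modes.
        shape3d = mean_shape + np.tensordot(coeffs, basis, axes=1)  # (n, 3)
        # Pose step: least-squares affine projection (2x3 + translation).
        A = np.hstack([shape3d, np.ones((n, 1))])       # (n, 4)
        P, *_ = np.linalg.lstsq(A, pts2d, rcond=None)   # (4, 2)
        proj = A @ P
        # Shape step: the projection is linear in the coefficients, so one
        # Gauss-Newton step with the exact Jacobian solves it given P.
        J = np.stack([(basis[j] @ P[:3]).ravel() for j in range(k)], axis=1)
        r = (pts2d - proj).ravel()
        coeffs += np.linalg.lstsq(J, r, rcond=None)[0]
    return coeffs, P
```

Because the projection is bilinear in pose and shape coefficients, each substep is an exact linear solve, and the alternation drives the reprojection residual down quickly on consistent data. A bundle-adjustment formulation, as in the paper, would instead refine pose and shape jointly over a window of frames.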