Multi-view active appearance models for the X-ray based analysis of avian bipedal locomotion
DAGM'11 Proceedings of the 33rd International Conference on Pattern Recognition
X-ray videography is one of the most important techniques for the locomotion analysis of animals in biology, motion science and robotics. Unfortunately, the evaluation of the vast amounts of acquired data is a tedious and time-consuming task: to date, the anatomical landmarks of interest have to be located manually in hundreds of images for each image sequence. An automation of this task is therefore highly desirable. The main difficulties for the automated tracking of these landmarks are the numerous occlusions caused by the movement of the animal and the low contrast of the X-ray images, which cause standard tracking approaches to fail in this setting. To overcome this limitation, we analyze the application of Active Appearance Models to this task. Based on real data, we show that these models can effectively deal with the occurring occlusions and low contrast and provide sound tracking results.
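The robustness to occlusion that the abstract attributes to Active Appearance Models comes largely from their statistical shape prior: an observed landmark configuration is constrained to a low-dimensional PCA subspace learned from training shapes, so noisy or partially unreliable landmark estimates are pulled back toward plausible configurations. The following is a minimal, self-contained sketch of that shape-model component only (not the authors' multi-view method); the training shapes are synthetic and all names are illustrative assumptions.

```python
import numpy as np

def build_shape_model(shapes, n_modes=2):
    """Learn a PCA shape model from flattened landmark vectors.

    shapes: (N, 2K) array, each row the (x, y) coordinates of K landmarks.
    Returns the mean shape and the first n_modes principal modes.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # PCA via SVD of the centered data matrix; rows of vt are the modes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes]

def project_shape(shape, mean, modes):
    """Constrain an observed (possibly noisy) shape to the model subspace.

    This regularization step is what lets an AAM produce plausible
    landmark configurations despite occlusions and low contrast.
    """
    b = modes @ (shape - mean)       # shape parameters
    return mean + modes.T @ b        # reconstruction in the subspace

# Synthetic training set: 8-point ellipses of varying size stand in for
# the anatomical landmark configurations tracked in the paper.
rng = np.random.default_rng(0)
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
shapes = np.array([
    np.column_stack((r * np.cos(angles), 0.5 * r * np.sin(angles))).ravel()
    for r in rng.uniform(1.0, 2.0, 50)
])

mean, modes = build_shape_model(shapes)

# A corrupted observation is pulled back toward the learned shape space.
noisy = shapes[0] + rng.normal(0.0, 0.05, shapes[0].shape)
fitted = project_shape(noisy, mean, modes)
```

In a full AAM, the shape parameters `b` are optimized jointly with appearance parameters against the image texture; this sketch isolates the prior that makes the fitting tolerant of missing or corrupted evidence.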