We present an extension of variable length Markov models (VLMMs) that allows them to model continuous input data, and show that the generative properties of these VLMMs are a powerful tool for dealing with real-world tracking issues. We explore methods for addressing the temporal correspondence problem in the context of a practical hand tracker; solving this problem is essential for supporting expectation in task-based control using these behavioural models. The hand tracker forms part of a larger multi-component distributed system, providing 3-D hand position data to a gesture recogniser client. We show how the performance of such a hand tracker can be improved by using feedback from the gesture recogniser client. In particular, feedback based on the generative extrapolation of the recogniser's internal models is shown to help the tracker deal with mid-term occlusion. We also show that VLMMs can be used to inform the prior in an expectation maximisation (EM) process used for joint spatial and temporal learning of image features.
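To make the two central ideas concrete, the sketch below shows a minimal variable-length Markov model over a discrete symbol alphabet, together with a generative roll-out of the kind that could bridge a mid-term occlusion by extrapolating the model forward. This is an illustrative toy, not the authors' implementation: in the paper the inputs are continuous, whereas here they are assumed to be already quantised into symbols (e.g. by clustering tracker states), and the class and method names (`VLMM`, `extrapolate`) are our own.

```python
from collections import defaultdict

class VLMM:
    """Toy variable-length Markov model: for prediction, the longest
    stored context matching the recent history is used (illustrative
    sketch; continuous data would first be quantised into symbols)."""

    def __init__(self, max_order=3):
        self.max_order = max_order
        # context (tuple of symbols) -> counts of the next symbol
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, sequence):
        # Count next-symbol frequencies for every context up to max_order.
        for i in range(len(sequence)):
            for order in range(min(self.max_order, i) + 1):
                context = tuple(sequence[i - order:i])
                self.counts[context][sequence[i]] += 1

    def predict(self, history):
        # Back off from the longest matching context to shorter ones.
        for order in range(min(self.max_order, len(history)), -1, -1):
            context = tuple(history[len(history) - order:])
            if context in self.counts:
                dist = self.counts[context]
                return max(dist, key=dist.get)
        return None

    def extrapolate(self, history, steps):
        # Generative roll-out: feed predictions back in as observations,
        # e.g. to stand in for measurements during an occlusion.
        out = list(history)
        for _ in range(steps):
            out.append(self.predict(out))
        return out[len(history):]
```

For example, a model trained on the repeating sequence `abcabcabc` predicts `c` after the context `ab`, and `extrapolate` continues the pattern for as many steps as the occlusion lasts. A real tracker would map the extrapolated symbols back to expected 3-D hand positions and use them as a prior for re-acquisition.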