We introduce an online adaptive algorithm for learning gesture models. Learning gesture models online makes the recognition process more robust and removes the need to train on a large ensemble of examples beforehand. Hidden Markov models are used to represent the spatial and temporal structure of each gesture. The usual output probability distributions (typically representing appearance) are trained at runtime by exploiting the temporal structure (the Markov model), which is either trained offline or explicitly hand-coded. In the early stages of runtime adaptation, contextual information derived from the application is used to bias the expectation as to which Markov state the system is in at any given time. We describe the Watch and Learn system, a computer vision system that learns simple gestures online for interactive control.
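The idea described above can be sketched in a few lines: keep the Markov transition structure fixed (hand-coded or trained offline), maintain a running belief over states with the forward recursion, optionally bias that belief with an application-supplied context prior, and nudge the output (emission) distributions toward each new observation in proportion to the posterior state probabilities. This is a minimal illustrative sketch, not the paper's actual implementation; the class name, Gaussian emissions, and the gradient-style learning rate are all assumptions made for the example.

```python
import numpy as np

class OnlineAdaptiveHMM:
    """Illustrative sketch (not the paper's code): an HMM whose transition
    structure is fixed while the Gaussian emission means adapt online."""

    def __init__(self, trans, means, var=1.0, lr=0.05):
        self.A = np.asarray(trans, dtype=float)    # fixed Markov structure
        self.mu = np.asarray(means, dtype=float)   # emission means, adapted at runtime
        self.var = var                             # shared isotropic variance (assumed)
        self.lr = lr                               # adaptation rate (assumed)
        n = len(self.mu)
        self.alpha = np.full(n, 1.0 / n)           # running state belief

    def _likelihood(self, x):
        # Unnormalized isotropic Gaussian likelihood of x under each state.
        d = x - self.mu
        return np.exp(-0.5 * np.sum(d * d, axis=-1) / self.var)

    def step(self, x, context_prior=None):
        """Process one observation: forward update, optional contextual bias,
        then adapt the output distributions toward the observation."""
        belief = (self.alpha @ self.A) * self._likelihood(x)
        if context_prior is not None:
            # Application-derived bias on the expected state (early adaptation).
            belief = belief * context_prior
        belief /= belief.sum()
        self.alpha = belief
        # Online adaptation of emission means, weighted by the
        # posterior state probabilities.
        self.mu += self.lr * belief[:, None] * (x - self.mu)
        return belief
```

For example, with a two-state left-to-right structure (`[[0.9, 0.1], [0.0, 1.0]]`) and initial means at 0 and 1, feeding observations repeatedly pulls the means of the states the model believes it is in toward the observed appearance, while the hand-coded temporal structure stays untouched.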