We introduce an online adaptive algorithm for learning gesture models. By learning gesture models online, the gesture recognition process is made more robust, and the need to train on a large ensemble of examples is obviated. Hidden Markov models are used to represent the spatial and temporal structure of the gesture. The output probability distributions (typically representing appearance) are trained at runtime by exploiting the temporal structure (the Markov model), which is either trained offline or explicitly hand-coded. In the early stages of runtime adaptation, contextual information derived from the application biases the expectation of which Markov state the system is in at any given time. We describe Watch and Learn, a computer vision system that learns simple gestures online for interactive control.
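The scheme the abstract describes can be sketched in code: a hand-coded left-to-right Markov model supplies the temporal structure, while the per-state output Gaussians are adapted online from each incoming observation, weighted by the current state posterior, with an optional application-derived bias applied early in adaptation. This is only an illustrative sketch, not the paper's actual algorithm; the state count, feature dimension, transition values, `step`, and `context_bias` are all assumptions introduced here.

```python
import numpy as np

N_STATES = 3   # assumed number of gesture phases
DIM = 2        # assumed feature dimension (e.g. hand position)

# Hand-coded left-to-right transition matrix (the temporal structure,
# which the paper says may be trained offline or hand-coded).
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.9, 0.1],
              [0.1, 0.0, 0.9]])

mu = np.zeros((N_STATES, DIM))        # output means, learned at runtime
var = np.ones((N_STATES, DIM))        # diagonal output variances
counts = np.ones(N_STATES)            # soft evidence accumulated per state
belief = np.full(N_STATES, 1.0 / N_STATES)  # current state posterior

def likelihood(x):
    """Diagonal-Gaussian output probability of x under each state."""
    d = (x - mu) ** 2 / var
    return np.exp(-0.5 * d.sum(axis=1)) / np.sqrt((2 * np.pi * var).prod(axis=1))

def step(x, context_bias=None):
    """One online update: predict with A, weight by output likelihoods,
    then adapt each state's mean and variance.

    context_bias, if given, is an assumed application-derived prior over
    states used to bias the posterior early in adaptation."""
    global belief, mu, var, counts
    pred = A.T @ belief                    # temporal prediction
    post = pred * likelihood(x)
    if context_bias is not None:
        post = post * context_bias         # contextual expectation bias
    post = post + 1e-12                    # numerical floor
    post /= post.sum()
    # Running EM-style update of the output distributions at runtime.
    counts += post
    lr = post / counts                     # per-state learning rates
    mu += lr[:, None] * (x - mu)
    var += lr[:, None] * ((x - mu) ** 2 - var)
    belief = post
    return belief

# Usage: feed observations as they arrive; bias early frames by context.
rng = np.random.default_rng(0)
for t in range(30):
    obs = rng.normal(loc=t / 10.0, scale=0.1, size=DIM)
    bias = np.array([1.0, 0.5, 0.25]) if t < 5 else None
    belief_now = step(obs, bias)
```

The per-state learning rate `lr` shrinks as a state accumulates evidence, so the output models stabilize over time while remaining adaptable; the contextual bias matters most before the appearance models have seen enough data to be informative.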