Real-time interactions with virtual agents driven by human action identification
AGENTS '97 Proceedings of the first international conference on Autonomous agents
Since their inception, interactive virtual environments have lacked the ability to interpret users' gestures. Researchers have investigated a few tentative solutions, but most address only a specific set of body parts, such as the hands, arms, or face. However, when a participant is placed in a virtual world to interact with its synthetic inhabitants, whole-body actions offer a more convenient and intuitive interface. To achieve this, we developed a hierarchical model of human actions built from fine-grained primitives, together with a recognition algorithm that identifies simultaneous actions on the fly.
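The abstract does not detail the model itself; as a purely illustrative sketch (every class, field, and value name below is hypothetical, not taken from the paper), a hierarchical action model over body-part primitives with on-the-fly matching of concurrent actions might look like this in Python:

```python
# Hypothetical sketch: actions are composed of fine-grained primitives attached
# to individual body parts, and a recognizer reports every action whose
# primitives are all observed in the current frame, so that simultaneous
# actions (e.g. waving while walking) can be identified on the fly.

from dataclasses import dataclass, field


@dataclass(frozen=True)
class Primitive:
    body_part: str   # e.g. "right_arm", "left_leg"
    posture: str     # e.g. "raised", "swing", "stance"


@dataclass
class Action:
    name: str
    primitives: frozenset = field(default_factory=frozenset)


class ActionRecognizer:
    def __init__(self, actions):
        self.actions = actions

    def recognize(self, observed):
        """Return all actions whose primitives are a subset of the observed set."""
        return [a.name for a in self.actions if a.primitives <= observed]


if __name__ == "__main__":
    wave = Action("wave", frozenset({Primitive("right_arm", "raised")}))
    walk = Action("walk", frozenset({Primitive("left_leg", "swing"),
                                     Primitive("right_leg", "stance")}))
    recognizer = ActionRecognizer([wave, walk])

    # One frame in which the user waves while walking: both actions are reported.
    frame = frozenset({Primitive("right_arm", "raised"),
                       Primitive("left_leg", "swing"),
                       Primitive("right_leg", "stance")})
    print(recognizer.recognize(frame))   # -> ['wave', 'walk']
```

This sketch only illustrates the general idea of composing actions from per-body-part primitives and reporting all matches per frame; the paper's actual hierarchy and recognition algorithm may differ substantially.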