We explore motion capture as a means of generating expressive bodily interaction between humans and virtual characters. Recorded interactions between humans serve as examples from which rules are formed that control a virtual character's reactions to human actions. The author of the rules selects the segments considered important and the features that best describe the desired interaction. These features are motion descriptors that can be computed in real time, such as the quantity of motion or the distance between the interacting characters. Rules are authored as mappings from the observed descriptors of a human to the desired descriptors of the responding virtual character. Our method enables a straightforward process for authoring continuous and natural interaction, and it can be used in games and interactive animations to produce dramatic and emotional effects. Our approach requires fewer example motions than previous machine-learning methods and allows manual editing of the produced interaction rules.
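To make the descriptor-and-rule idea concrete, the following Python sketch computes two real-time motion descriptors of the kind named above (quantity of motion and inter-character distance) and applies an authored rule as a one-dimensional mapping from an observed human descriptor to a desired character descriptor. This is a minimal illustration under assumed conventions: the function names, the choice of linear interpolation for the mapping, the root-joint convention, and all numeric values are assumptions for this sketch, not the paper's implementation.

```python
import numpy as np

def quantity_of_motion(prev_pose, cur_pose, dt):
    """Sum of joint speeds between two consecutive pose frames.

    Poses are (num_joints, 3) arrays of joint positions; summing joint
    speeds is one common definition of quantity of motion, assumed here.
    """
    return np.linalg.norm(cur_pose - prev_pose, axis=1).sum() / dt

def character_distance(pose_a, pose_b):
    """Distance between two characters, using joint 0 as the root (assumed)."""
    return np.linalg.norm(pose_a[0] - pose_b[0])

class InteractionRule:
    """Maps an observed human descriptor value to a desired descriptor value
    for the responding character, by linear interpolation between keypoints.

    In the workflow described above, the keypoints would come from the
    segments of recorded human-human interaction that the author selected.
    """
    def __init__(self, observed_values, desired_values):
        order = np.argsort(observed_values)
        self.x = np.asarray(observed_values, dtype=float)[order]
        self.y = np.asarray(desired_values, dtype=float)[order]

    def __call__(self, observed):
        return np.interp(observed, self.x, self.y)

# Illustrative rule (made-up values): a calm human elicits a calm response,
# while very fast motion elicits an exaggerated, dramatic reaction.
qom_rule = InteractionRule(observed_values=[0.0, 0.5, 2.0],
                           desired_values=[0.1, 0.4, 3.0])
print(qom_rule(1.2))  # desired quantity of motion for the character
```

Because each rule is just a small table of descriptor keypoints, it can be inspected and edited by hand after authoring, which is one way to read the claim that the produced interaction rules support manual editing.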