The processes and representations used to generate the behavior of expressive virtual characters are a valuable and largely untapped resource for helping those characters make sense of the world around them. In this paper, we present Max T. Mouse, an anthropomorphic animated mouse character who uses his own motor and behavior representations to interpret the behaviors he sees his friend Morris Mouse performing. Specifically, by using his own motor and action systems as models for the behavioral capabilities of others (a process known as Simulation Theory in the cognitive science literature), Max can begin to identify simple goals and motivations behind Morris's behavior, an important step toward developing socially intelligent animated characters. Additionally, Max uses a novel motion graph-based movement recognition process to accurately parse and imitate Morris's movements and behaviors in real time and without prior examples, even when provided with limited synthetic visual input. Key contributions of this paper include demonstrating that using the same mechanisms for both the perception and the production of movement and behavior allows for an elegant conservation of representation, and that the innate structure of motion graphs can be used to facilitate both movement parsing and movement recognition.
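To make the motion graph-based recognition idea concrete, the following is a minimal, hypothetical sketch of the general technique: treating the nodes of a motion graph as short motion clips and segmenting an observed pose stream by greedily choosing, at each step, the successor clip that best matches the incoming observations. All names, data structures, and the toy 1-D pose features are illustrative assumptions, not the paper's actual implementation.

```python
# Toy motion graph: nodes are short clips of scalar "pose" features,
# edges ("next") list the clips that may legally follow each node.
# These clips and values are invented for illustration only.
MOTION_GRAPH = {
    "walk": {"frames": [0.0, 0.2, 0.4, 0.6], "next": ["walk", "wave"]},
    "wave": {"frames": [1.0, 1.4, 1.0, 0.6], "next": ["walk"]},
}

def clip_cost(frames, observed_window):
    """Mean absolute distance between a clip and an observation window."""
    return sum(abs(f - o) for f, o in zip(frames, observed_window)) / len(frames)

def parse(observed, start="walk"):
    """Greedily segment an observed pose stream into motion-graph clips.

    At each step, only successors of the current node are considered,
    so the graph's transition structure constrains the parse.
    """
    labels, i, current = [], 0, start
    while i < len(observed):
        candidates = MOTION_GRAPH[current]["next"]
        best = min(
            candidates,
            key=lambda c: clip_cost(
                MOTION_GRAPH[c]["frames"],
                observed[i : i + len(MOTION_GRAPH[c]["frames"])],
            ),
        )
        labels.append(best)
        i += len(MOTION_GRAPH[best]["frames"])
        current = best
    return labels
```

In this sketch, a stream that begins with walk-like poses and then wave-like poses is parsed as the clip sequence `["walk", "wave"]`; recognized clip labels then serve both as a parse of the observed movement and as an index into the character's own repertoire for imitation.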