We aim to create a model of emotional reactive virtual humans. This model defines realistic behavior for virtual characters based on their emotions and on events in the Virtual Environment to which they react. A large set of pre-recorded animations is used to build such a model. We have defined a knowledge-based system that stores animations of reflex movements, taking personality and emotional state into account. Populating such a database is a complex task. In this paper we describe a multimodal authoring tool that addresses this problem; it combines motion capture equipment, a handheld device, and a large projection screen.
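To illustrate the kind of knowledge base described above, the following is a minimal sketch of a database that indexes pre-recorded reflex animations by emotional state and personality. All names here (`EmotionalContext`, `AnimationDB`, the emotion and personality labels, and the clip file names) are hypothetical illustrations, not part of the system described in the paper:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class EmotionalContext:
    """Key describing the state under which a reflex animation applies."""
    emotion: str      # illustrative labels, e.g. "fear", "joy"
    personality: str  # e.g. "neurotic", "extrovert"

@dataclass
class AnimationDB:
    """Toy knowledge base mapping an emotional context to recorded clips."""
    clips: dict = field(default_factory=dict)

    def add(self, ctx: EmotionalContext, clip_name: str) -> None:
        """Store a pre-recorded animation under the given context."""
        self.clips.setdefault(ctx, []).append(clip_name)

    def lookup(self, ctx: EmotionalContext) -> list:
        """Return the reflex animations stored for this context."""
        return self.clips.get(ctx, [])

# Populate with one captured clip and query it back.
db = AnimationDB()
db.add(EmotionalContext("fear", "neurotic"), "flinch_01.bvh")
print(db.lookup(EmotionalContext("fear", "neurotic")))  # ['flinch_01.bvh']
```

In practice, populating such a structure with motion-captured clips for every combination of personality and emotion is exactly the authoring burden the multimodal tool is designed to ease.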