Multiparty-interaction technology is changing entertainment, education, and training. Deployed examples of such technology include embodied agents and robots that act as a museum guide, a news presenter, a teacher, a receptionist, or someone trying to sell you insurance, homes, or tickets. In all these cases, the embodied agent needs to explain and describe. This article describes the design of a 3D virtual presenter that uses different output channels (including speech and animation of posture, pointing, and involuntary movements) to present and explain. The behavior is scripted and synchronized with a 2D display containing associated text and regions (slides, drawings, and paintings) at which the presenter can point. This article is part of a special issue on interactive entertainment.