Multimodal sensing, interpretation and copying of movements by a virtual agent
PIT'06 Proceedings of the 2006 international tutorial and research conference on Perception and Interactive Technologies
In this paper we present an agent that can analyse certain human full-body movements and respond expressively with copying behaviour. Our work focuses on the analysis of human full-body movement for animating a virtual agent, called Greta, which perceives and interprets the user's expressivity and responds appropriately. The system takes as input video data of a dancer moving in space; analysis of the video and automatic extraction of motion cues are performed in EyesWeb. We consider the amplitude and speed of movement. To generate the agent's animation, we then map these motion cues onto the corresponding expressivity parameters of the agent. We also present a behaviour markup language for virtual agents that defines the values of expressivity parameters on gestures.
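The mapping step described above could be sketched as follows. This is a minimal illustration, not the authors' implementation: the function and parameter names (`map_motion_cues`, `spatial_extent`, `temporal_extent`) and the value ranges are assumptions, standing in for whatever cue ranges EyesWeb produces and whatever parameter scale the agent expects.

```python
def normalize(value, lo, hi):
    """Linearly map value from [lo, hi] into [-1.0, 1.0], clamped.

    Expressivity parameters for virtual agents are often expressed on a
    symmetric scale around a neutral value; this range is an assumption.
    """
    if hi <= lo:
        raise ValueError("invalid cue range")
    t = (value - lo) / (hi - lo)
    return max(-1.0, min(1.0, 2.0 * t - 1.0))


def map_motion_cues(amplitude, speed,
                    amp_range=(0.0, 1.0), speed_range=(0.0, 1.0)):
    """Map extracted motion cues onto hypothetical expressivity parameters.

    Movement amplitude is mapped to a spatial parameter and movement
    speed to a temporal one; the cue ranges are illustrative defaults.
    """
    return {
        "spatial_extent": normalize(amplitude, *amp_range),
        "temporal_extent": normalize(speed, *speed_range),
    }
```

For example, a dancer moving at half the maximum amplitude but full speed would yield a neutral spatial parameter (0.0) and a maximal temporal one (1.0), which the animation engine could then apply to the agent's gestures.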