BEAT: the Behavior Expression Animation Toolkit. Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques.
Natural head motion synthesis driven by acoustic prosodic features. Computer Animation and Virtual Worlds (CASA 2005 special issue: Virtual Humans and Social Agents).
Handbook of Mathematical Functions, with Formulas, Graphs, and Mathematical Tables.
Speech driven head motion synthesis based on a trajectory model. ACM SIGGRAPH 2007 Posters.
[HUGE]: universal architecture for statistically based HUman GEsturing. IVA'06: Proceedings of the 6th International Conference on Intelligent Virtual Agents.
Multimodal behavior realization for embodied conversational agents. Multimedia Tools and Applications.
COST'09: Proceedings of the Second International Conference on Development of Multimodal Interfaces: Active Listening and Synchrony.
Autonomous Speaker Agent (ASA) is a graphically embodied animated agent capable of reading plain English text and rendering it in the form of speech, accompanied by appropriate, natural-looking facial gestures [1]. This paper focuses on improving ASA's head movement trajectories so that its facial gestures look as natural as possible. Based on the gathered data, we propose mathematical functions that, given two input parameters (the maximum amplitude and the duration of the gesture), generate a natural-looking head motion trajectory. The proposed functions were implemented in our existing ASA platform and compared with our previous head movement models. The results were shown to a large audience, who noticed an improvement in head motion and did not detect any patterns suggesting that the animation was driven by predefined motion trajectories.
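The abstract does not specify the mathematical functions used, only their interface: maximum amplitude and gesture duration in, a smooth head-motion trajectory out. As a minimal illustrative sketch of that interface (not the paper's actual model), the hypothetical function below samples a raised-cosine bump, which starts and ends at zero with zero velocity, giving a natural ease-in/ease-out nod; the function name, sampling rate, and curve shape are all assumptions for illustration.

```python
import math

def head_nod_trajectory(amplitude, duration, dt=0.04):
    """Sample a smooth head-rotation trajectory for one gesture.

    amplitude: peak head rotation (e.g. degrees) -- hypothetical parameter
    duration:  gesture length in seconds
    dt:        sampling interval in seconds (0.04 s = 25 fps)
    """
    n = max(int(round(duration / dt)), 1)
    samples = []
    for i in range(n + 1):
        t = i / n  # normalized time in [0, 1]
        # Raised-cosine (Hann) bump: zero at both endpoints, peak at t = 0.5,
        # with zero slope at the ends so the head eases in and out of the nod.
        samples.append(amplitude * 0.5 * (1.0 - math.cos(2.0 * math.pi * t)))
    return samples
```

Any function with this two-parameter signature could be swapped in; the key property the paper argues for is that the resulting curves look organic rather than like replayed, predefined trajectories.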