In this article, we present a multichannel animation system for producing utterances signed in French Sign Language (LSF) by a virtual character. Such a system faces two main challenges: capturing data for the entire body simultaneously, including the movements of the torso, hands, and face, and developing a data-driven animation engine that accounts for the expressive characteristics of signed languages. Our approach decomposes motion along different channels, each representing the body parts that correspond to the linguistic components of signed languages. We show that this animation system can create novel utterances in LSF, and present an evaluation by target users that highlights the importance of the respective body parts in the production of signs. We validate our framework by testing the believability and intelligibility of our virtual signer.
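The channel decomposition described above can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's actual engine or API: each linguistic channel (torso, hands, face) carries its own keyframed motion track, and a full-body pose is assembled by merging the per-channel joint values at a given time.

```python
# Hypothetical sketch of multichannel pose composition. All names
# (CHANNELS, compose_pose, joint labels) are illustrative assumptions,
# not identifiers from the described system.

CHANNELS = ("torso", "right_hand", "left_hand", "face")

def compose_pose(channel_tracks, t):
    """Merge per-channel keyframe tracks at time t into one pose dict.

    channel_tracks maps a channel name to {joint: [(time, value), ...]}.
    """
    pose = {}
    for channel in CHANNELS:
        track = channel_tracks.get(channel, {})
        for joint, keyframes in track.items():
            # Nearest-keyframe sampling keeps the sketch minimal; a real
            # engine would interpolate and blend between channels.
            pose[joint] = min(keyframes, key=lambda kf: abs(kf[0] - t))[1]
    return pose

tracks = {
    "torso": {"spine": [(0.0, 0.0), (1.0, 10.0)]},
    "right_hand": {"r_wrist": [(0.0, 5.0), (1.0, 15.0)]},
}
print(compose_pose(tracks, 0.9))  # each joint sampled from its own channel
```

Because each body part lives on its own channel, motion for one linguistic component (e.g. a facial expression) can be recombined with a different manual sign, which is the property the multichannel decomposition is meant to enable.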