This paper describes GeSsyCa, a system able to produce synthetic sign language gestures from a high-level specification. This specification is made with a language based both on a discrete description of space and on a movement decomposition inspired by sign language gestures. Communication gestures are represented through symbolic commands that can be described by qualitative data and translated into spatio-temporal targets driving a generation system. Such an approach is possible for the class of generation models controlled through key-point information. The generation model used in our approach is composed of a set of sensori-motor servo-loops. Each servo-loop resolves its inversion in real time from the direct specification of location targets, while satisfying psycho-motor laws of biological movement. The whole control system is applied to the synthesis of communication and sign language gestures, and a validation of the synthesized movements is presented.
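To make the pipeline concrete, the sketch below shows one way a symbolic location target could drive a trajectory that satisfies a psycho-motor law of biological movement. This is a minimal illustration, not the GeSsyCa servo-loop model itself: the minimum-jerk profile (Flash & Hogan, 1985) stands in for the paper's biologically motivated laws, and the function name, coordinates, and parameters are all hypothetical.

```python
def minimum_jerk(start, target, duration, dt=0.01):
    """Hypothetical sketch: generate a point-to-point hand trajectory
    from a spatio-temporal target, following the minimum-jerk profile
    (one classic psycho-motor law of biological movement).

    `start` and `target` are 3D positions in signing space (assumed
    coordinates); returns a list of positions sampled every `dt` seconds.
    """
    steps = int(duration / dt)
    traj = []
    for i in range(steps + 1):
        t = i / steps                      # normalized time in [0, 1]
        s = 10*t**3 - 15*t**4 + 6*t**5     # minimum-jerk interpolation polynomial
        traj.append([a + s * (b - a) for a, b in zip(start, target)])
    return traj

# Hypothetical symbolic command resolved to a location target:
# move the dominant hand to a discrete point in signing space in 0.8 s.
traj = minimum_jerk(start=[0.0, 0.0, 0.0], target=[0.3, 0.5, 0.2], duration=0.8)
```

The trajectory starts and ends at rest (zero velocity and acceleration at both endpoints), which is the property that gives biological point-to-point movements their characteristic bell-shaped velocity profile; a key-point-driven generation system would chain such segments between successive targets.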