Children's organization of discourse structure through pausing means
NOLISP'05 Proceedings of the 3rd international conference on Non-Linear Analyses and Algorithms for Speech Processing
Previous research has shown that synchronization exists between speech and holds in adults and in 9-year-old children with a rich vocabulary and advanced language skills. When and how does this synchrony develop during child language acquisition? Can it also be observed in children younger than 9? The present work aims to answer these questions by reporting on an analysis of narrations produced by three age groups of Italian children (9, 5, and 3 year olds). Measurements are provided of the amount of synchronization between speech pauses and holds in the three groups, as a function of the duration of the narrations. The results show that, as far as the reported data are concerned, holds and speech pauses in children, as in adults, are synchronized to a certain extent and play similar functions, suggesting that they may be considered a multi-determined phenomenon exploited by the speaker, under the guidance of a unified planning process, to satisfy a communicative intention. In addition, considering the role that speech pauses play in communication, we speculate that holds may serve similar purposes, supporting the hypothesis that gestures, like speech, are an expressive resource that can take on different functions depending on the communicative demand. While speech pauses are likely to signal mental activation processes aimed at replacing the "old spoken content" of the communicative plan with new content, holds may signal mental activation processes aimed at replacing the "old visible bodily action" with new actions reflecting the representational and/or propositional contribution of gestures to the new communicative plan.