Head motion naturally occurs in synchrony with speech and may carry paralinguistic information, such as intention, attitude and emotion, in dialogue communication. To verify the relationship between head motion and the dialogue acts carried by speech, analyses were conducted on motion-captured data from several speakers during natural dialogues. The results first confirmed the trends of our previous work: regardless of the speaker, nods frequently occur during speech utterances, not only to express dialogue acts such as agreement and affirmation, but also at the last syllable of a phrase and at strong phrase boundaries, especially when the speaker is talking confidently or expressing interest in the interlocutor's talk. Inter-speaker variability indicated that the frequency of head motion may vary with the speaker's age or status, while intra-speaker variability indicated that it also differs depending on the inter-personal relationship with the interlocutor. A simple model for generating nods, based on rules inferred from the analysis results, was proposed and evaluated on two types of humanoid robots. Subjective scores showed that the proposed model could generate head motions with naturalness comparable to the original motions.
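A rule-based nod generator of the kind described above might be sketched as follows. This is a minimal illustration, not the authors' implementation: the `Phrase` fields, the boundary-strength scale, and the word-level stand-in for syllable timing are all assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class Phrase:
    text: str
    boundary_strength: int  # hypothetical scale: 1 = weak ... 3 = strong
    dialogue_act: str       # e.g. "agreement", "affirmation", "statement"
    confident: bool         # whether the speaker is talking confidently

# Dialogue acts that, per the analysis, frequently co-occur with nods
NOD_ACTS = {"agreement", "affirmation"}

def should_nod(phrase: Phrase) -> bool:
    """Decide whether to insert a nod for this phrase.

    Rules loosely follow the abstract: nod when expressing agreement or
    affirmation, or at a strong phrase boundary during confident speech.
    """
    if phrase.dialogue_act in NOD_ACTS:
        return True
    if phrase.boundary_strength >= 3 and phrase.confident:
        return True
    return False

def nod_word_index(phrase: Phrase) -> int:
    """Align the nod with the phrase-final syllable, approximated here
    as the index of the last word (real systems would segment syllables)."""
    return len(phrase.text.split()) - 1
```

A humanoid-robot controller could call `should_nod` per phrase and schedule a head-pitch motion at the time of the word returned by `nod_word_index`.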