Journal of Cognitive Neuroscience
During face-to-face communication, one not only hears speech but also sees the speaker's communicative hand movements. Such hand gestures have been shown to play an important role in communication, with the two modalities influencing each other's interpretation. A gesture typically overlaps in time with its coexpressive speech, but the gesture is often initiated before (not after) the coexpressive speech. The present ERP study investigated what degree of asynchrony between speech and gesture onsets is optimal for semantic integration of concurrent gesture and speech. Videos of a person gesturing were combined with speech segments that were either semantically congruent or incongruent with the gesture. Although gesture and speech always overlapped in time, they were presented with three degrees of asynchrony: in the SOA 0 condition, gesture onset and speech onset were simultaneous; in the SOA 160 and SOA 360 conditions, speech was delayed by 160 and 360 msec, respectively. ERPs time-locked to speech onset showed a significant N400 difference between semantically congruent and incongruent gesture-speech combinations in the SOA 0 and SOA 160 conditions; no significant difference was found in the SOA 360 condition. These results imply that speech and gesture are integrated most efficiently when the difference in onsets does not exceed a certain time span, because iconic gestures need speech to be disambiguated in a way relevant to the speech context.
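The design described above can be summarized as a 3 (SOA: 0, 160, 360 msec) × 2 (congruency) factorial layout in which speech onset lags gesture onset by the SOA, and ERPs are time-locked to speech onset. The following is a minimal Python sketch of that trial structure; the function and field names (`build_conditions`, `speech_onset_ms`, etc.) are hypothetical and not taken from the study's materials.

```python
from itertools import product

# Hypothetical sketch of the 3 (SOA) x 2 (congruency) design described in the abstract.
SOAS_MS = [0, 160, 360]                      # delay of speech onset relative to gesture onset
CONGRUENCY = ["congruent", "incongruent"]    # semantic relation of speech to the gesture

def build_conditions(gesture_onset_ms=0):
    """Return one trial descriptor per cell of the 3 x 2 design."""
    trials = []
    for soa, cong in product(SOAS_MS, CONGRUENCY):
        trials.append({
            "soa_ms": soa,
            "congruency": cong,
            "gesture_onset_ms": gesture_onset_ms,
            # ERPs are time-locked to speech onset, which lags gesture onset by the SOA
            "speech_onset_ms": gesture_onset_ms + soa,
        })
    return trials

trials = build_conditions()
assert len(trials) == 6  # 3 SOAs x 2 congruency levels
```

In this framing, the reported result is that the congruency effect on the N400 emerges for the `soa_ms` values 0 and 160, but not 360.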