Multimodal cues present in human-human dialogue help us to interpret other people's utterances. We undertake an exploratory study of the relationships between multimodal cues and communicative acts, i.e. between what people say and what people do when interacting with one another. If such patterns exist and can be recognised, they could be exploited to support both the understanding of multimodal interaction behaviour and interface design. An initial analysis of lexical categories and hand/arm gestures suggests that some lexical categories are more strongly associated with certain gesture types; in particular, nouns and pronouns are emphasised in 87% of the multimodal production acts involving deictic gestures.
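To illustrate the kind of association analysis described above, the following is a minimal sketch of how co-occurrence proportions such as the 87% figure could be tabulated from annotated dialogue data. The data format, field names, and example values are assumptions for illustration only, not the authors' corpus or analysis pipeline.

```python
# Minimal sketch (illustrative, not the authors' method): tabulating how often
# each lexical category co-occurs with each gesture type in production acts.
from collections import Counter, defaultdict

# Hypothetical annotations: each production act lists the lexical categories of
# the emphasised words and the type of co-occurring hand/arm gesture.
production_acts = [
    {"lexical_categories": ["noun"], "gesture": "deictic"},
    {"lexical_categories": ["pronoun"], "gesture": "deictic"},
    {"lexical_categories": ["verb"], "gesture": "beat"},
    {"lexical_categories": ["noun", "adjective"], "gesture": "iconic"},
]

# Count, per gesture type, how many acts emphasise each lexical category.
counts = defaultdict(Counter)
totals = Counter()
for act in production_acts:
    totals[act["gesture"]] += 1
    for category in set(act["lexical_categories"]):
        counts[act["gesture"]][category] += 1

# Report the proportion of acts of each gesture type in which each lexical
# category is emphasised (e.g. nouns/pronouns within deictic-gesture acts).
for gesture, total in totals.items():
    for category, n in counts[gesture].most_common():
        print(f"{gesture}: {category} emphasised in {n / total:.0%} of acts")
```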