Towards integrated microplanning of language and iconic gesture for multimodal output
Proceedings of the 6th international conference on Multimodal interfaces
The recognition and comprehension of hand gestures: a review and research agenda
ZiF'06: Proceedings of the 2nd ZiF research group international conference on Embodied communication in humans and machines: Modeling communication with robots and virtual humans
Imaginary interfaces: spatial interaction with empty hands and without visual feedback
UIST '10: Proceedings of the 23rd annual ACM symposium on User interface software and technology
Body posture estimation in sign language videos
GW'09 Proceedings of the 8th international conference on Gesture in Embodied Communication and Human-Computer Interaction
Describing the location of a landmark in a scene typically requires taking a perspective. Descriptions of scenes with several landmarks use either a route perspective, where the viewpoint is within the scene, or a survey perspective, where the viewpoint is outside, or a mixture of both. Parallel to this, American Sign Language (ASL) uses two spatial formats: viewer space, in which the described space is conceived of as in front of the speaker, and diagrammatic space, in which the described space is conceived of as seen from outside, usually above. In the present study, speakers of English or ASL described one of two memorized maps. ASL signers were more likely to adopt a survey perspective than English speakers, indicating that language modality can influence perspective choice. In ASL, descriptions from a survey perspective used diagrammatic space, whereas descriptions from a route perspective used viewer space. In English, iconic gestures accompanying route descriptions used the full 3-D space, similar to viewer space, while gestures accompanying survey descriptions used a 2-D horizontal or vertical plane, similar to diagrammatic space. Thus, the two modes of experiencing environments, from within and from without, are expressed naturally in speech, sign, and gesture.