In this paper, we describe an interface that demonstrates spatial intelligence. This interface, an embodied conversational kiosk, builds on research in embodied conversational agents (ECAs) and on information displays in mixed reality and kiosk format. ECAs leverage people's ability to coordinate information displayed in multiple modalities, particularly information conveyed in speech and gesture. Mixed reality depends on users' interactions with everyday objects that are enhanced with computational overlays. We describe an implementation, MACK (Media lab Autonomous Conversational Kiosk), an ECA who can answer questions about, and give directions to, the MIT Media Lab's various research groups, projects, and people. MACK uses a combination of speech, gesture, and indications on an ordinary paper map that users place on a table between themselves and MACK. Research issues include users' differential attention to hand gestures, speech, and the map, and how references made through these modalities can be fused in both input understanding and output generation.
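The fusion of deictic speech with pointing gestures on the map can be illustrated with a minimal sketch. It assumes a simple timestamp-window alignment strategy, resolving each deictic word (e.g. "this", "here") to the nearest pointing gesture in time; the event types and the `fuse` function are illustrative assumptions, not MACK's actual architecture:

```python
from dataclasses import dataclass

@dataclass
class SpeechEvent:
    word: str
    t: float       # timestamp in seconds
    deictic: bool  # True for words like "this", "here", "that"

@dataclass
class GestureEvent:
    target: str    # map region the pointing gesture indicates
    t: float       # timestamp in seconds

def fuse(speech, gestures, window=1.0):
    """Resolve each deictic word to the pointing gesture closest in
    time, provided one occurs within `window` seconds of the word."""
    resolved = {}
    for s in speech:
        if not s.deictic:
            continue
        best = None
        for g in gestures:
            dt = abs(g.t - s.t)
            if dt <= window and (best is None or dt < abs(best.t - s.t)):
                best = g
        if best is not None:
            # Key combines word and time so repeated words stay distinct.
            resolved[f"{s.word}@{s.t}"] = best.target
    return resolved
```

For example, if a user says "How do I get to this group?" while tapping a region of the map, the deictic "this" would be bound to the tapped region, and non-deictic words would be ignored. A real system would also weight gesture salience and dialogue state, but a time-window heuristic captures the core idea of cross-modal reference resolution.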