This paper describes the “FAME” multi-modal demonstrator, which integrates multiple communication modes – vision, speech and object manipulation – by combining the physical and virtual worlds to support multi-cultural and multi-lingual communication and problem solving. The major challenges are the automatic perception of human actions and the understanding of dialogs between people from different cultural or linguistic backgrounds. The system acts as an information butler, demonstrating context awareness through computer vision, speech and dialog modeling. This integrated, computer-enhanced human-to-human communication has been publicly demonstrated at FORUM2004 in Barcelona and at IST2004 in The Hague. Specifically, the “Interactive Space” described here features an “Augmented Table” for multi-cultural interaction, which allows several users simultaneously to perform multi-modal, cross-lingual retrieval of audio-visual documents previously recorded by an “Intelligent Cameraman” during a week-long seminar.