Computer systems that strive to be context aware cannot ignore the influence of language on contextual interpretation. Once systems are designed to perceive language and other forms of human action, these interpretive processes will necessarily be context dependent. As an example, we illustrate how people simply and naturally create new contexts by naming and referring. We then describe Rasa, a mixed-reality system that observes and understands how users in a military command post create such contexts as part of maintaining situational awareness. In these environments, commanders' maps are covered with Post-it® notes. Through the application of multimodal language, these paper artifacts are contextually transformed to represent units in the field. Rasa understands this language, thereby allowing paper-based tools to become the basis for digital interaction. Finally, we argue that, to be effective, architectures for such context-aware systems must be built to process the inherent ambiguity and uncertainty of human input.
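The closing claim about ambiguity and uncertainty can be illustrated with a minimal sketch of n-best multimodal fusion, in the spirit of mutual disambiguation: each modality (speech, gesture) produces a ranked list of uncertain hypotheses, and the integrator scores compatible cross-modal pairs jointly rather than trusting either recognizer's top guess alone. All names, hypothesis strings, and confidence values below are illustrative assumptions, not details of the Rasa system.

```python
from itertools import product

# Hypothetical n-best hypothesis lists as (interpretation, confidence) pairs.
# The labels and scores are invented for illustration only.
speech_nbest = [("label_unit:2nd Brigade", 0.6), ("label_unit:2nd Battalion", 0.4)]
gesture_nbest = [("select_note:A", 0.7), ("select_note:B", 0.3)]

def compatible(s: str, g: str) -> bool:
    """Toy type check: a spoken labeling act can combine with a note selection.
    A real multimodal integrator would enforce richer unification constraints."""
    return s.startswith("label_unit") and g.startswith("select_note")

def fuse(speech, gesture):
    """Rank joint interpretations by the product of modality confidences."""
    joint = [((s, g), ps * pg)
             for (s, ps), (g, pg) in product(speech, gesture)
             if compatible(s, g)]
    return sorted(joint, key=lambda pair: pair[1], reverse=True)

best, score = fuse(speech_nbest, gesture_nbest)[0]
# Best joint reading pairs the top speech and gesture hypotheses (score 0.42),
# but a low-ranked hypothesis in one modality could win if strongly supported
# by the other -- which is how mutual disambiguation corrects recognition errors.
```

The key design point is that fusion operates over hypothesis *lists*, preserving uncertainty until cross-modal evidence is available, instead of committing to each recognizer's single best output.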