QuickSet: multimodal interaction for distributed applications
MULTIMEDIA '97 Proceedings of the fifth ACM international conference on Multimedia
A multiple device approach for supporting whiteboard-based interactions
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Augmented surfaces: a spatially continuous work space for hybrid computing environments
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Using handhelds and PCs together
Communications of the ACM
Interacting at a distance: measuring the performance of laser pointers and other devices
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Comparing paper and tangible, multimodal tools
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Evaluating look-to-talk: a gaze-aware interface in a collaborative environment
CHI '02 Extended Abstracts on Human Factors in Computing Systems
Visual instruments for an interactive mural
CHI '99 Extended Abstracts on Human Factors in Computing Systems
Interacting at a Distance Using Semantic Snarfing
UbiComp '01 Proceedings of the 3rd international conference on Ubiquitous Computing
“Put-that-there”: Voice and gesture at the graphics interface
SIGGRAPH '80 Proceedings of the 7th annual conference on Computer graphics and interactive techniques
Advances in meeting recognition
HLT '01 Proceedings of the first international conference on Human language technology research
A plan-based mission control center for autonomous vehicles
Proceedings of the 9th international conference on Intelligent user interfaces
Challenges in designing interactive systems for emergency response
DIS '06 Proceedings of the 6th conference on Designing Interactive systems
Multimodal user interface facilitating critical data entry for traffic incident management
MMUI '05 Proceedings of the 2005 NICTA-HCSNet Multimodal User Interaction Workshop - Volume 57
Contextual push-to-talk: a new technique for reducing voice dialog duration
Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services
Adding speech recognition support to UML tools
Journal of Visual Languages and Computing
Cooperation in ubiquitous computing: an extended view on sharing
From Integrated Publication and Information Systems to Virtual Information and Knowledge Environments
uEmergency: a collaborative system for emergency management on very large tabletop
Proceedings of the 2012 ACM international conference on Interactive tabletops and surfaces
We describe a system that facilitates collaboration using multiple modalities, including speech, handwriting, gestures, gaze tracking, direct manipulation, large projected touch-sensitive displays, laser pointer tracking, conventional monitors with mouse and keyboard, and wirelessly networked handhelds. The system allows multiple, geographically dispersed participants to mix these modalities simultaneously and flexibly, using the right interface at the right time on one or more machines. This paper discusses each modality provided, how the modalities were integrated into the system architecture, and how the user interface enables one or more people to work flexibly across one or more devices.