Intelligent multi-media interface technology
Intelligent user interfaces
The logic of typed feature structures
Integrating simultaneous input from speech, gaze, and hand gestures
Intelligent multimedia interfaces
A generic platform for addressing the multimodal challenge
CHI '95 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
QuickSet: multimodal interaction for distributed applications
MULTIMEDIA '97 Proceedings of the fifth ACM international conference on Multimedia
Reinventing the familiar: exploring an augmented reality design space for air traffic control
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Bridging physical and virtual worlds with electronic tags
Proceedings of the SIGCHI conference on Human Factors in Computing Systems
Something from nothing: augmenting a paper-based work practice via multimodal interaction
DARE '00 Proceedings of DARE 2000 on Designing augmented reality environments
The ins and outs of collaborative walls: demonstrating the collaborage concept
CHI '99 Extended Abstracts on Human Factors in Computing Systems
“Put-that-there”: Voice and gesture at the graphics interface
SIGGRAPH '80 Proceedings of the 7th annual conference on Computer graphics and interactive techniques
The Adaptive Agent Architecture: Achieving Fault-Tolerance Using Persistent Broker Teams
ICMAS '00 Proceedings of the Fourth International Conference on MultiAgent Systems (ICMAS-2000)
Confirmation in multimodal systems
COLING '98 Proceedings of the 17th international conference on Computational linguistics - Volume 2
Unification-based multimodal parsing
COLING '98 Proceedings of the 17th international conference on Computational linguistics - Volume 1
Multimodal interactive maps: designing for human performance
Human-Computer Interaction
Multimodal integration-a statistical view
IEEE Transactions on Multimedia
Comparing paper and tangible, multimodal tools
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Sketching for military courses of action diagrams
Proceedings of the 8th international conference on Intelligent user interfaces
EDCIS '02 Proceedings of the First International Conference on Engineering and Deployment of Cooperative Information Systems
Real-Time Gesture Recognition by Means of Hybrid Recognizers
GW '01 Revised Papers from the International Gesture Workshop on Gesture and Sign Languages in Human-Computer Interaction
Advances in the robust processing of multimodal speech and pen systems
Multimodal interface for human-machine communication
Perceptual Collaboration in Neem
ICMI '02 Proceedings of the 4th IEEE International Conference on Multimodal Interfaces
A visual modality for the augmentation of paper
Proceedings of the 2001 workshop on Perceptive user interfaces
Designing augmented reality interfaces
ACM SIGGRAPH Computer Graphics - Learning through computer-generated visualization
Just point and click?: using handhelds to interact with paper maps
Proceedings of the 7th international conference on Human computer interaction with mobile devices & services
The Neem Platform: An Evolvable Framework for Perceptual Collaborative Applications
Journal of Intelligent Information Systems
Localisation and Interaction for Augmented Maps
ISMAR '05 Proceedings of the 4th IEEE/ACM International Symposium on Mixed and Augmented Reality
Designers' use of paper and the implications for informal tools
OZCHI '05 Proceedings of the 17th Australia conference on Computer-Human Interaction: Citizens Online: Considerations for Today and the Future
Marked-up maps: combining paper maps and electronic information resources
Personal and Ubiquitous Computing
Proceedings of the working conference on Advanced visual interfaces
Human-centered collaborative interaction
Proceedings of the 1st ACM international workshop on Human-centered multimedia
Collaborative multimodal photo annotation over digital paper
Proceedings of the 8th international conference on Multimodal interfaces
TaPuMa: tangible public map for information acquirement through the things we carry
Proceedings of the 1st international conference on Ambient media and systems
HCI Beyond the GUI: Design for Haptic, Speech, Olfactory, and Other Nontraditional Interfaces
Context shifts: extending the meanings of physical objects with language
Human-Computer Interaction
"Move the couch where?": developing an augmented reality multimodal interface
ISMAR '06 Proceedings of the 5th IEEE and ACM International Symposium on Mixed and Augmented Reality
Multimodal interaction: a new focal area for AI
IJCAI'01 Proceedings of the 17th international joint conference on Artificial intelligence - Volume 2
Architecture of a framework for generic assisting conversational agents
IVA'06 Proceedings of the 6th international conference on Intelligent Virtual Agents
An evaluation of an augmented reality multimodal interface using speech and paddle gestures
ICAT'06 Proceedings of the 16th international conference on Advances in Artificial Reality and Tele-Existence
Rasa is a tangible augmented reality environment that digitally enhances the existing paper-based command and control capability in a military command post. By observing and understanding users' speech, pen, and touch-based multimodal language, Rasa computationally augments the physical objects on a command post map, linking these items to digital representations of the same: for example, linking a paper map to the world and Post-it notes to military units. Herein, we give a thorough account of Rasa's underlying multiagent framework and of its recognition, understanding, and multimodal integration components. Moreover, we examine five properties of language (generativity, comprehensibility, compositionality, referentiality, and, at times, persistence) that render it suitable as an augmentation approach, contrasting these properties with those of other augmentation methods. It is these properties of language that allow users of Rasa to augment physical objects, transforming them into tangible interfaces.