Many compelling multimodal prototypes that pair spoken input and output with a graphical user interface have been developed, yet it has often proved difficult to make them available to a large audience. This unfortunate reality limits the degree to which authentic user interactions with such systems can be collected and subsequently analyzed. We present the WAMI toolkit, which alleviates this difficulty by providing a framework for developing, deploying, and evaluating Web-Accessible Multimodal Interfaces in which users interact via speech, mouse, pen, and/or touch. The toolkit makes use of modern web-programming techniques, enabling browser-based applications that rival the quality of traditional native interfaces yet are available on a wide array of Internet-connected devices. We will showcase several sophisticated multimodal applications developed and deployed using the toolkit, which are available via desktop, laptop, and tablet PCs, as well as via several mobile devices. In addition, we will discuss the resources the toolkit provides for collecting, transcribing, and annotating usage data from multimodal user interactions.