Mobile devices, such as smartphones, have become powerful enough to implement efficient speech-based and multimodal interfaces, and there is an increasing need for such systems. This chapter gives an overview of the design and development issues involved in implementing mobile speech-based and multimodal systems. It reviews infrastructure design solutions that make it possible to distribute the user interface between servers and mobile devices, and that support migration of the user interface from server-based to distributed services. An example is given of how an existing server-based spoken timetable application is turned into a distributed multimodal mobile application.
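The kind of server/client split described above can be illustrated with a minimal sketch. All names here are hypothetical and no real speech engine or network transport is used: the heavyweight components (recognition and dialogue management over the timetable data) are modeled as server-side functions, while the mobile client only renders a modality-independent reply as concurrent speech and screen output.

```python
# Hypothetical sketch of a distributed multimodal timetable dialogue:
# server side hosts recognition + dialogue logic, client side renders output.

# --- server side: heavy components (recognizer, dialogue manager, data) ---
TIMETABLE = {("central", "airport"): ["08:15", "09:45", "11:30"]}

def recognize(audio: str) -> str:
    """Stand-in for a server-side speech recognizer; here the 'audio'
    is already a transcript, so recognition just normalizes case."""
    return audio.lower()

def dialogue_manager(utterance: str) -> dict:
    """Parse a 'from X to Y' request and build a modality-independent reply
    that the client can render in whatever modalities it supports."""
    words = utterance.split()
    origin = words[words.index("from") + 1]
    dest = words[words.index("to") + 1]
    times = TIMETABLE.get((origin, dest), [])
    return {
        "speech": f"Next departures from {origin} to {dest}: " + ", ".join(times),
        "display": times,
    }

# --- client side: lightweight rendering on the mobile device ---
def render_on_device(reply: dict) -> str:
    """Turn the abstract reply into two concurrent output modalities:
    a spoken prompt (TTS) and an on-screen departure list."""
    screen = "\n".join(f"* {t}" for t in reply["display"])
    return f"[TTS] {reply['speech']}\n[SCREEN]\n{screen}"

if __name__ == "__main__":
    reply = dialogue_manager(recognize("From Central to Airport"))
    print(render_on_device(reply))
```

Because the reply is modality-independent, the same server-side dialogue logic can serve a speech-only telephone client and a multimodal smartphone client; only `render_on_device` differs per device, which is the essence of the migration path the chapter describes.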