While Graphical User Interfaces (GUIs) still represent the most common way of operating modern computing technology, Spoken Dialog Systems (SDSs) have the potential to offer a more natural and intuitive mode of interaction. Although speech recognition is sometimes dismissed as unreliable or impractical, the success of recent product releases such as Apple's Siri or Nuance's Dragon Drive suggests that language-based interaction is increasingly gaining acceptance. Yet, unlike applications for building GUIs, tools and frameworks that support the design, construction, and maintenance of dialog systems are rare. A particular challenge of SDS design is the often complex integration of technologies. Systems usually consist of several components (e.g. speech recognition, language understanding, and output generation), each of which requires expertise to deploy in a given application domain. This paper presents work in progress that aims at supporting this integration process. We propose a framework of components and describe how it may be used to prototype and gradually implement a spoken dialog system without requiring extensive domain expertise.
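The component integration described above can be illustrated with a minimal sketch, assuming a simple sequential pipeline in which each stage (recognition, understanding, generation) transforms a shared turn state. All names and the keyword-based stubs here are hypothetical, for illustration only; they are not the paper's actual framework API:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Turn:
    audio: str        # stand-in for raw audio input
    text: str = ""    # recognizer hypothesis
    intent: str = ""  # language-understanding result
    response: str = ""  # generated system output

def recognize(turn: Turn) -> Turn:
    # Speech recognition stub: treat the "audio" field as a transcript.
    turn.text = turn.audio
    return turn

def understand(turn: Turn) -> Turn:
    # Language understanding stub: trivial keyword-based intent detection.
    turn.intent = "greeting" if "hello" in turn.text.lower() else "unknown"
    return turn

def generate(turn: Turn) -> Turn:
    # Output generation stub: map each intent to a canned response.
    responses = {
        "greeting": "Hello! How can I help?",
        "unknown": "Sorry, I did not understand.",
    }
    turn.response = responses[turn.intent]
    return turn

def run_pipeline(audio: str, stages: List[Callable[[Turn], Turn]]) -> Turn:
    # Stages are pluggable: a prototype stub can later be swapped for a
    # real engine without touching the rest of the pipeline.
    turn = Turn(audio=audio)
    for stage in stages:
        turn = stage(turn)
    return turn

result = run_pipeline("Hello there", [recognize, understand, generate])
```

The point of the sketch is the interchangeability of stages: gradually replacing stubs with production components is one plausible reading of the "prototype and gradually implement" workflow the abstract describes.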