This paper describes a method for estimating the internal state of a spoken dialog system user before his or her first input utterance. When using a dialog-based system, users are often perplexed by the system's prompt. A typical system provides more detailed guidance to a user who is taking time to respond, but such assistance is a nuisance if the user is merely considering how to answer. To respond appropriately, a spoken dialog system should take the user's internal state into account before the input begins. Conventional studies on user modeling have relied on the linguistic content of the utterance to estimate the user's internal state, but such an approach cannot produce an estimate until the user's first utterance has ended. We therefore focus on the user's nonverbal behavior before the input utterance begins, such as fillers, silence, and head movements. The experimental data were collected in a Wizard of Oz setting, and the labels were assigned by five evaluators. Finally, we conducted a discrimination experiment with a user model trained on the combined features. The model achieved about 85% accuracy on a three-class open test.
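The classification setup described above can be sketched as a small supervised-learning pipeline. This is a minimal illustration only: the feature names (filler count, silence duration, head-movement count), the three state labels, and the toy data are assumptions, not the paper's actual feature set; the use of an SVM classifier follows the paper's reference to LIBSVM, here via scikit-learn's wrapper.

```python
# Hypothetical sketch of three-class discrimination of a user's pre-utterance
# internal state from combined nonverbal features (feature names, labels,
# and data below are illustrative assumptions, not the paper's real data).
import numpy as np
from sklearn.svm import SVC

# Assumed feature vector per session: [filler_count, silence_sec, head_moves]
X = np.array([
    [0, 0.5, 0],   # responds promptly
    [2, 3.0, 1],   # considering how to answer
    [1, 6.0, 3],   # perplexed by the prompt
    [0, 0.8, 0],
    [3, 2.5, 1],
    [2, 7.5, 2],
])
# Assumed internal-state labels:
# 0 = ready to answer, 1 = considering an answer, 2 = perplexed
y = np.array([0, 1, 2, 0, 1, 2])

# RBF-kernel SVM, as commonly trained with LIBSVM-style tools
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y)

# Estimate the internal state for a new pre-utterance observation
state = clf.predict([[2, 6.8, 2]])[0]
```

In a real system, the predicted state would select the follow-up behavior, e.g. repeating the prompt verbatim for a "considering" user but rephrasing it with extra detail for a "perplexed" one.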