The future of human-computer interfaces may include systems that are human-like in abilities and behavior. One particularly interesting aspect of human-to-human communication is the ability of some conversation partners to pick up sensitively on the nuances of the other's utterances as they shift from moment to moment, and to use this information to subtly adjust responses to express interest, supportiveness, sympathy, and the like. This paper reports a model of this ability in the context of a spoken dialog system for a tutoring-like interaction. The system inferred the user's internal state, such as feelings of confidence, confusion, pleasure, and dependency, from the prosody of their utterances and the dialog context, and used this information to select the most appropriate acknowledgement form at each moment. Although straightforward rating revealed no significant preference for a system with this ability, a clear preference emerged when users rated the system after listening to a recording of their interaction with it. This suggests that human-like, real-time sensitivity can be of value in interfaces. The paper further discusses ways to discover and quantify such rules of social interaction, using corpus-based analysis, developer intuitions, and feedback from naive judges, and suggests that the technique of "evaluation after re-listening" is useful for evaluating spoken dialog systems that operate at near-human levels of performance.
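To make the selection step concrete, the mapping from inferred user state to acknowledgement form can be pictured as a small rule table. The sketch below is illustrative only: the four state dimensions follow the abstract, but the threshold value, the rule ordering, and the acknowledgement wordings are hypothetical stand-ins, not the rules reported in the paper.

```python
def select_acknowledgement(state):
    """Pick an acknowledgement form from inferred user-state scores.

    `state` maps dimension names ("confidence", "confusion", "pleasure",
    "dependency") to scores in [0, 1], as might be estimated from
    prosody and dialog context. Thresholds and wordings are invented
    for illustration.
    """
    if state.get("confusion", 0.0) > 0.6:
        return "mm-hm?"        # tentative; invites clarification
    if state.get("dependency", 0.0) > 0.6:
        return "that's right"  # supportive confirmation
    if state.get("pleasure", 0.0) > 0.6:
        return "great!"        # shares the user's enthusiasm
    if state.get("confidence", 0.0) > 0.6:
        return "okay"          # neutral acceptance; yields the floor
    return "uh-huh"            # default back-channel

# Example: a confused-sounding utterance draws a tentative response.
print(select_acknowledgement({"confusion": 0.8, "pleasure": 0.2}))
```

In a running system this function would be called once per user utterance, with the scores refreshed from the prosody analyzer each turn, which is what allows the response style to track the moment-to-moment shifts the paper describes.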