Varieties of user misconceptions: detection and correction
IJCAI'83 Proceedings of the Eighth international joint conference on Artificial intelligence - Volume 2
Because people's knowledge is often partial or faulty, misconceptions will inevitably be revealed in the course of a conversation. If a misconception is recognized, the other person may say something to correct it, and the conversation continues. Just as this is the case when people interact with each other, so must it be when users interact with a computer system. For example, in interacting with an expert system, a user may reveal misconceptions about objects modelled by the system. By failing to correct such misconceptions, the system may not only confirm the original misconception but may also cause the user to develop further ones. It is therefore up to the system to recognize and respond to misconceptions in an effective way. In this paper the space of possible object misconceptions is characterized according to the kind of incorrect information involved. This characterization is often useful in determining how the user arrived at the misconception, and therefore what kind of information to include in the response. Using such a characterization, a system can correct object misconceptions in a domain-independent way. Factors that affect the amount of information included in a correction, such as discourse and situational context, are also examined.
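The abstract describes a two-step idea: classify a user's object misconception by the kind of incorrect information involved, then choose corrective content based on that type. A minimal sketch of this idea follows; the specific type names (`misclassification`, `misattribution`), the toy knowledge base, and all function names are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: classify an object misconception by the kind of
# incorrect information involved, then pick response content accordingly.
# The knowledge base and type labels below are invented for illustration.

KB = {
    "whale": {"superclass": "mammal", "attributes": {"breathes": "air"}},
    "shark": {"superclass": "fish", "attributes": {"breathes": "water"}},
}

def classify_misconception(obj, claimed_superclass=None, claimed_attr=None):
    """Return the kind of incorrect information in the user's statement."""
    entry = KB.get(obj)
    if entry is None:
        return "unknown-object"
    if claimed_superclass and claimed_superclass != entry["superclass"]:
        return "misclassification"   # object placed in the wrong class
    if claimed_attr:
        name, value = claimed_attr
        if entry["attributes"].get(name) != value:
            return "misattribution"  # wrong property ascribed to the object
    return "no-misconception"

def correct(obj, kind):
    """Choose the content of the correction based on the misconception type."""
    entry = KB[obj]
    if kind == "misclassification":
        return f"No, a {obj} is a {entry['superclass']}."
    if kind == "misattribution":
        facts = ", ".join(f"{k} {v}" for k, v in entry["attributes"].items())
        return f"Actually, a {obj} {facts}."
    return "That's right."

# User says "a whale is a fish" -> classified as a misclassification,
# so the correction supplies the correct superclass.
kind = classify_misconception("whale", claimed_superclass="fish")
print(correct("whale", kind))
```

Because the classification logic is separate from the domain facts in `KB`, swapping in a different knowledge base leaves the correction strategy unchanged, which is the domain-independence the abstract claims.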