Many interactive systems in everyday use carry out roles that are also performed, or have previously been performed, by human beings. Our expectations of how such systems will, and more importantly should, behave are tempered both by our experience of how humans normally perform in those roles and by our experience and beliefs about what it is possible and reasonable for machines to do. An important factor underpinning the acceptability of such systems is therefore the plausibility with which the role they are performing is viewed by their users.

We identify three kinds of potential plausibility issue, depending on whether (i) the system is seen by its users to be a machine acting in its own right, (ii) the machine is seen to be a proxy, either acting on behalf of a human or providing a channel of communication to a human, or (iii) the status of the machine is unclear between the first two cases.