During conversations, speakers employ a number of verbal and nonverbal mechanisms to establish who participates in the conversation, when, and in what capacity. Gaze cues and mechanisms are particularly instrumental in establishing the participant roles of interlocutors, managing speaker turns, and signaling discourse structure. If humanlike robots are to have fluent conversations with people, they will need to use these gaze mechanisms effectively. The current work investigates people's use of key conversational gaze mechanisms, how these mechanisms might be designed for and implemented in humanlike robots, and whether the resulting signals effectively shape human-robot conversations. We focus particularly on whether humanlike gaze mechanisms can help robots signal different participant roles, manage turn-exchanges, and shape how interlocutors perceive the robot and the conversation. The evaluation of these mechanisms involved 36 trials of three-party human-robot conversations in which the robot used gaze mechanisms to signal the roles of its two conversational partners: two addressees, an addressee and a bystander, or an addressee and a nonparticipant. Results showed that participants conformed to their intended roles 97% of the time. Their conversational roles affected their rapport with the robot, their feelings of groupness with their conversational partners, and their attention to the task.