Speech patterns in video-mediated conversations
CHI '92 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
New technological windows into mind: there is more in eyes and brains for human-computer interaction
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
The GAZE groupware system: mediating joint attention in multiparty communication and collaboration
Proceedings of the SIGCHI conference on Human Factors in Computing Systems
Embodiment in conversational interfaces: Rea
Proceedings of the SIGCHI conference on Human Factors in Computing Systems
Why conversational agents should catch the eye
CHI '00 Extended Abstracts on Human Factors in Computing Systems
Messages embedded in gaze of interface agents --- impression management with agent's gaze
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Evaluating look-to-talk: a gaze-aware interface in a collaborative environment
CHI '02 Extended Abstracts on Human Factors in Computing Systems
Gaze behavior of talking faces makes a difference
CHI '02 Extended Abstracts on Human Factors in Computing Systems
Head orientation and gaze direction in meetings
CHI '02 Extended Abstracts on Human Factors in Computing Systems
Designing attentive interfaces
ETRA '02 Proceedings of the 2002 symposium on Eye tracking research & applications
Explaining effects of eye gaze on mediated group conversations: amount or synchronization?
CSCW '02 Proceedings of the 2002 ACM conference on Computer supported cooperative work
GAZE-2: conveying eye contact in group video conferencing using eye-controlled camera direction
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Face-Responsive Interfaces: From Direct Manipulation to Perceptive Presence
UbiComp '02 Proceedings of the 4th international conference on Ubiquitous Computing
EyePliances: attention-seeking devices that respond to visual attention
CHI '03 Extended Abstracts on Human Factors in Computing Systems
Just blink your eyes: a head-free gaze tracking system
CHI '03 Extended Abstracts on Human Factors in Computing Systems
Are you looking at me? Eye contact and desktop video conferencing
ACM Transactions on Computer-Human Interaction (TOCHI)
The researcher's dilemma: evaluating trust in computer-mediated communication
International Journal of Human-Computer Studies - Special issue: Trust and technology
Where is "it"? Event Synchronization in Gaze-Speech Input Systems
Proceedings of the 5th international conference on Multimodal interfaces
Identifying the addressee in human-human-robot interactions based on head pose and speech
Proceedings of the 6th international conference on Multimodal interfaces
Automated Eye Motion Using Texture Synthesis
IEEE Computer Graphics and Applications
Conversing with the user based on eye-gaze patterns
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
eyeView: focus+context views for large group video conferences
CHI '05 Extended Abstracts on Human Factors in Computing Systems
OverHear: augmenting attention in remote social gatherings through computer-mediated hearing
CHI '05 Extended Abstracts on Human Factors in Computing Systems
Media eyepliances: using eye tracking for remote control focus selection of appliances
CHI '05 Extended Abstracts on Human Factors in Computing Systems
Catch me if you can: exploring lying agents in social settings
Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems
Gamble v2.0: social interactions with multiple users
Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems
Analyzing and predicting focus of attention in remote collaborative tasks
ICMI '05 Proceedings of the 7th international conference on Multimodal interfaces
Using social geometry to manage interruptions and co-worker attention in office environments
GI '05 Proceedings of Graphics Interface 2005
Direction of attention perception for conversation initiation in virtual environments
Lecture Notes in Computer Science
A model of attention and interest using Gaze behavior
Lecture Notes in Computer Science
Modeling gaze behavior for a 3D ECA in a dialogue situation
Proceedings of the 11th international conference on Intelligent user interfaces
AuraOrb: social notification appliance
CHI '06 Extended Abstracts on Human Factors in Computing Systems
'User as assessor' approach to embodied conversational agents
From brows to trust
Recognizing gaze aversion gestures in embodied conversational discourse
Proceedings of the 8th international conference on Multimodal interfaces
AuraOrb: using social awareness cues in the design of progressive notification appliances
OZCHI '06 Proceedings of the 18th Australia conference on Computer-Human Interaction: Design: Activities, Artefacts and Environments
Gaze-based infotainment agents
Proceedings of the international conference on Advances in computer entertainment technology
The conductor interaction method
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)
Simultaneous prediction of dialog acts and address types in three-party conversations
Proceedings of the 9th international conference on Multimodal interfaces
Multimodal cues for addressee-hood in triadic communication with a human information retrieval agent
Proceedings of the 9th international conference on Multimodal interfaces
Influencing social dynamics in meetings through a peripheral display
Proceedings of the 9th international conference on Multimodal interfaces
Integrating vision and audition within a cognitive architecture to track conversations
Proceedings of the 3rd ACM/IEEE international conference on Human robot interaction
"She is just stupid" - Analyzing user-agent interactions in emotional game situations
Interacting with Computers
Providing expressive gaze to virtual animated characters in interactive applications
Computers in Entertainment (CIE) - SPECIAL ISSUE: Media Arts
Dynamic Bayesian network based interest estimation for visual attentive presentation agents
Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems - Volume 1
Highly Realistic 3D Presentation Agents with Visual Attention Capability
SG '07 Proceedings of the 8th international symposium on Smart Graphics
Designing Socially Aware Conversational Agents
PIT '08 Proceedings of the 4th IEEE tutorial and research workshop on Perception and Interactive Technologies for Speech-Based Systems: Perception in Multimodal Dialogue Systems
As go the feet...: on the estimation of attentional focus from stance
ICMI '08 Proceedings of the 10th international conference on Multimodal interfaces
A Fitts Law comparison of eye tracking and manual input in the selection of visual targets
ICMI '08 Proceedings of the 10th international conference on Multimodal interfaces
Footing in human-robot conversations: how robots might shape participant roles using gaze cues
Proceedings of the 4th ACM/IEEE international conference on Human robot interaction
Making agents gaze naturally - does it work?
Proceedings of the Working Conference on Advanced Visual Interfaces
The Attentive Hearing Aid: Eye Selection of Auditory Sources for Hearing Impaired Users
INTERACT '09 Proceedings of the 12th IFIP TC 13 International Conference on Human-Computer Interaction: Part I
Social signal processing: Survey of an emerging domain
Image and Vision Computing
Investigating the use of visual focus of attention for audio-visual speaker diarisation
MM '09 Proceedings of the 17th ACM international conference on Multimedia
Multimodal end-of-turn prediction in multi-party meetings
Proceedings of the 2009 international conference on Multimodal interfaces
Implementing a Multi-user Tour Guide System with an Embodied Conversational Agent
AMT '09 Proceedings of the 5th International Conference on Active Media Technology
Models for multiparty engagement in open-world dialog
SIGDIAL '09 Proceedings of the SIGDIAL 2009 Conference: The 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Learning to predict engagement with a spoken dialog system in open-world settings
SIGDIAL '09 Proceedings of the SIGDIAL 2009 Conference: The 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Attention and interaction control in a human-human-computer dialogue setting
SIGDIAL '09 Proceedings of the SIGDIAL 2009 Conference: The 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Conditional sequence model for context-based recognition of gaze aversion
MLMI'07 Proceedings of the 4th international conference on Machine learning for multimodal interaction
Communicating with multiple users for embodied conversational agents in quiz game context
International Journal of Intelligent Information and Database Systems
Multimodal support for social dynamics in co-located meetings
Personal and Ubiquitous Computing
Interacting with a gaze-aware virtual character
Proceedings of the 2010 workshop on Eye gaze in intelligent human machine interaction
Identifying utterances addressed to an agent in multiparty human-agent conversations
IVA'11 Proceedings of the 10th international conference on Intelligent virtual agents
Conversational gaze mechanisms for humanlike robots
ACM Transactions on Interactive Intelligent Systems (TiiS)
Visual attention and eye gaze during multiparty conversations with distractions
IVA'06 Proceedings of the 6th international conference on Intelligent Virtual Agents
Real-Time feedback on nonverbal behaviour to enhance social dynamics in small group meetings
MLMI'05 Proceedings of the Second international conference on Machine Learning for Multimodal Interaction
Gamble — a multiuser game with an embodied conversational agent
ICEC'05 Proceedings of the 4th international conference on Entertainment Computing
Carpe diem: exploring user experience and intimacy in eye-based video conferencing
Proceedings of the 10th International Conference on Mobile and Ubiquitous Multimedia
Engaging in a conversation with synthetic characters along the virtuality continuum
SG'05 Proceedings of the 5th international conference on Smart Graphics
MAWARI: a social interface to reduce the workload of the conversation
ICSR'11 Proceedings of the Third international conference on Social Robotics
Towards measuring the quality of interaction: communication through telepresence robots
Proceedings of the Workshop on Performance Metrics for Intelligent Systems
Addressee identification for human-human-agent multiparty conversations in different proxemics
Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction
Visual interaction and conversational activity
Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction
Social network analysis for technology-enhanced learning: review and future directions
International Journal of Technology Enhanced Learning
Are you looking at me?: perception of robot attention is mediated by gaze type and group size
Proceedings of the 8th ACM/IEEE international conference on Human-robot interaction
Hand and eyes: how eye contact is linked to gestures in video conferencing
CHI '13 Extended Abstracts on Human Factors in Computing Systems
Designing engagement-aware agents for multiparty conversations
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Gaze and turn-taking behavior in casual conversational interactions
ACM Transactions on Interactive Intelligent Systems (TiiS) - Special issue on interaction with smart objects, Special section on eye gaze and conversation
Gaze locking: passive eye contact detection for human-object interaction
Proceedings of the 26th annual ACM symposium on User interface software and technology
Slaves no longer: review on role assignment for human-robot joint motor action
Adaptive Behavior - Animals, Animats, Software Agents, Robots, Adaptive Systems
In multi-agent, multi-user environments, users as well as agents should have a means of establishing who is talking to whom. In this paper, we present an experiment aimed at evaluating whether users' gaze directional cues could be used for this purpose. Using an eye tracker, we measured subjects' gaze at the faces of their conversational partners during four-person conversations. Results indicate that when someone is listening or speaking to an individual, there is indeed a high probability that the person looked at is the person being listened to (p=88%) or spoken to (p=77%). We conclude that gaze is an excellent predictor of conversational attention in multiparty conversations. As such, it may form a reliable source of input for conversational systems that need to establish whom the user is speaking or listening to. We implemented our findings in FRED, a multi-agent conversational system that uses eye input to gauge which agent the user is listening or speaking to.
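A system like FRED can exploit these findings by treating the most-looked-at partner during an utterance as the probable addressee. The following is a minimal sketch of that idea, not the authors' implementation: the input format (a list of partner identifiers, one per recorded gaze fixation), the function name, and the way the study's conditional probabilities are folded into a confidence score are all illustrative assumptions.

```python
from collections import Counter

# Conditional probabilities reported in the study: the person looked at
# is the person being spoken to (77%) or listened to (88%).
P_SPEAKING = 0.77
P_LISTENING = 0.88

def probable_addressee(gaze_samples, mode="speaking"):
    """Estimate whom the user is addressing from gaze fixations.

    gaze_samples: sequence of partner identifiers, one per gaze fixation
    recorded during an utterance (hypothetical input format; a real
    system would map eye-tracker fixations onto partners' faces).
    Returns (partner, confidence), where confidence combines the share
    of fixations on that partner with the study's conditional probability.
    """
    if not gaze_samples:
        return None, 0.0
    counts = Counter(gaze_samples)
    partner, n = counts.most_common(1)[0]  # most-looked-at partner
    share = n / len(gaze_samples)
    prior = P_SPEAKING if mode == "speaking" else P_LISTENING
    return partner, share * prior

# Example: during one utterance the user mostly fixates agent "B".
partner, conf = probable_addressee(["A", "B", "B", "B", "C"], mode="speaking")
print(partner)  # B
```

A production system would of course accumulate fixation durations rather than raw sample counts and handle ties or off-face gaze, but the core decision rule stays this simple.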