To overcome the limitations of current technologies for remote collaboration, we propose a system that changes a video feed based on task properties, people's actions, and message properties. First, we examined how participants manage different visual resources in a laboratory experiment using a collaborative task in which one partner (the helper) instructs another (the worker) in assembling online puzzles. We analyzed helpers' eye gaze as a function of these parameters. Helpers gazed at the set of alternative pieces more frequently when the pieces were harder for workers to differentiate, and less frequently over repeated trials. The results further suggest that a helper's desired focus of attention can be predicted from task properties, the partner's actions, and message properties. We propose a conditional Markov model classifier to explore the feasibility of predicting gaze based on these properties. The model's accuracy ranged from 65.40% for puzzles with easy-to-name pieces to 74.25% for puzzles with more difficult-to-name pieces. The results suggest that our model can be used to automatically manipulate video feeds to show helpers what they want to see when they want to see it.
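To illustrate the kind of classifier the abstract describes, the sketch below shows a minimal conditional Markov model that predicts the next gaze target from the previous target plus an observed context feature (e.g., a discretized task, action, or message property). This is a hypothetical simplification for illustration only; the feature names (`hard_pieces`, `pieces_area`, etc.) and the maximum-likelihood counting scheme are assumptions, not the paper's actual implementation.

```python
from collections import defaultdict


class ConditionalMarkovClassifier:
    """Illustrative sketch of a conditional Markov model classifier.

    Predicts the next gaze target given the previous gaze target and a
    context feature, using maximum-likelihood transition counts. The
    state/feature vocabulary here is hypothetical.
    """

    def __init__(self):
        # counts[(prev_target, context)][next_target] -> frequency
        self.counts = defaultdict(lambda: defaultdict(int))

    def fit(self, sequences):
        """Train from trials, each a list of (context, gaze_target) pairs."""
        for seq in sequences:
            prev = None  # None marks the start of a trial
            for context, target in seq:
                self.counts[(prev, context)][target] += 1
                prev = target

    def predict(self, prev_target, context):
        """Return the most likely next gaze target, or None if unseen."""
        dist = self.counts[(prev_target, context)]
        if not dist:
            return None
        return max(dist, key=dist.get)


if __name__ == "__main__":
    # Toy trial: when pieces are hard to differentiate, the helper tends
    # to keep gazing at the pieces area rather than the workspace.
    trial = [
        ("hard_pieces", "pieces_area"),
        ("hard_pieces", "pieces_area"),
        ("easy_pieces", "workspace"),
    ]
    model = ConditionalMarkovClassifier()
    model.fit([trial])
    print(model.predict("pieces_area", "hard_pieces"))
```

In practice such a model would be evaluated by held-out prediction accuracy per gaze sample, which is the style of figure (65.40% vs. 74.25%) reported in the abstract.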