A saliency-based method of simulating visual attention in virtual scenes
Proceedings of the 16th ACM Symposium on Virtual Reality Software and Technology
Lie tracking: social presence, truth and deception in avatar-mediated telecommunication
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Clearspace: mixed reality virtual teamrooms
Proceedings of the 2011 international conference on Virtual and mixed reality: systems and applications - Volume Part II
DS-RT '11 Proceedings of the 2011 IEEE/ACM 15th International Symposium on Distributed Simulation and Real Time Applications
SphereAvatar: a situated display to represent a remote collaborator
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Controlling an avatar's pointing gestures in desktop collaborative virtual environments
Proceedings of the 17th ACM international conference on Supporting group work
In face-to-face collaboration, eye gaze serves both as a bidirectional signal to monitor and indicate focus of attention and action, and as a resource for managing the interaction. In remote interaction supported by Immersive Collaborative Virtual Environments (ICVEs), embodied avatars representing and controlled by each participant share a virtual space. We report on a study designed to evaluate methods of avatar eye gaze control during an object-focused puzzle scenario performed between three networked CAVE™-like systems. We compare tracked gaze, in which avatars' eyes are controlled by head-mounted mobile eye trackers worn by participants; a gaze model that uses head orientation to generate saccades; and static gaze, featuring non-moving eyes. We analyse task performance, subjective user experience, and interactional behaviour. While tracked gaze did not provide a statistically significant benefit over static gaze, it was observed to be the highest-performing condition. The gaze model, however, resulted in significantly lower task performance and an increased error rate.
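The abstract does not specify how the head-orientation gaze model generates saccades. As a rough illustration only, a minimal sketch of one plausible heuristic is shown below: trigger an eye saccade toward the new head direction whenever head yaw changes by more than a threshold, and otherwise hold the previous fixation. The function name, threshold, and eye-lead offset are all assumptions for illustration, not details from the paper.

```python
import math

def saccade_from_head(prev_yaw_deg, curr_yaw_deg, threshold_deg=10.0):
    """Hypothetical head-driven saccade heuristic (illustrative only).

    Returns a (state, eye_yaw_deg) pair: a "saccade" snapping the eyes
    toward the new head direction when head rotation exceeds the
    threshold, else a "fixate" state holding the previous gaze yaw.
    """
    delta = curr_yaw_deg - prev_yaw_deg
    if abs(delta) > threshold_deg:
        # Saccade: eyes lead the head slightly in the rotation direction
        # (5-degree lead is an arbitrary illustrative value).
        return ("saccade", curr_yaw_deg + math.copysign(5.0, delta))
    # Small head movement: keep fixating the previous gaze direction.
    return ("fixate", prev_yaw_deg)
```

A model of this kind reacts only to head motion, which may explain why it can misdirect avatar eyes relative to the wearer's true gaze in object-focused tasks.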