This paper evaluates the use of Visual Deictic Reference (VDR) in Collaborative Virtual Environments (CVEs). A simple CVE capable of hosting two (or more) participants simultaneously immersed in the same virtual environment serves as the testbed. One participant's VDR, obtained by tracking the participant's gaze, is projected into co-participants' environments in real time as a colored lightspot. We compare the VDR lightspot when it is eye-slaved with when it is head-slaved, and show that an eye-slaved VDR helps disambiguate the deictic point of reference, especially when the user's line of sight is decoupled from their head direction.
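The core mechanism described above can be sketched as a ray-plane projection: the lightspot is placed where a reference ray from the participant's head hits a shared surface, with the ray taken from the eye tracker (eye-slaved) or from the head orientation (head-slaved). The following is a minimal, hypothetical sketch (the flat "wall" plane, the function names, and the default geometry are assumptions for illustration, not the paper's actual implementation):

```python
def dot(a, b):
    """Dot product of two 3-vectors given as sequences."""
    return sum(x * y for x, y in zip(a, b))

def ray_plane_hit(origin, direction, plane_point, plane_normal):
    """Point where the ray from `origin` along `direction` meets the plane,
    or None if the ray is parallel to the plane or points away from it."""
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None
    t = dot([p - o for p, o in zip(plane_point, origin)], plane_normal) / denom
    if t < 0:
        return None
    return tuple(o + t * d for o, d in zip(origin, direction))

def vdr_lightspot(head_pos, head_dir, eye_dir=None,
                  plane_point=(0.0, 0.0, -5.0),
                  plane_normal=(0.0, 0.0, 1.0)):
    """Place the VDR lightspot on a shared wall plane (hypothetical geometry).
    Eye-slaved when an eye-tracked direction is supplied; head-slaved otherwise."""
    direction = eye_dir if eye_dir is not None else head_dir
    return ray_plane_hit(head_pos, direction, plane_point, plane_normal)
```

When the line of sight is decoupled from the head direction, the two modes place the lightspot at different points, which is the ambiguity the eye-slaved VDR resolves:

```python
head = (0.0, 0.0, 0.0)
forward = (0.0, 0.0, -1.0)   # head faces the wall at z = -5
glance = (0.2, 0.0, -1.0)    # eyes deviate to the right

vdr_lightspot(head, forward)          # head-slaved: (0.0, 0.0, -5.0)
vdr_lightspot(head, forward, glance)  # eye-slaved:  (1.0, 0.0, -5.0)
```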