A feature of standard video-mediated communication (VMC) systems is that participants see into each other's spaces from the viewpoint of a camera. Consequently, participants' capacity to use the spatially-based resources available in co-located settings (e.g. the production and comprehension of pointing and of eye-gaze direction) can be compromised. Whilst positioning cameras close to displays, or switching or interpolating between multiple cameras to provide appropriately aligned views, can reduce this problem, an alternative paradigm is to use immersive projection technology to locate participants within an immersive collaborative virtual environment (ICVE), in which remote participants appear as 3D graphical representations. Two approaches to the representation of remote participants in ICVEs have been studied: embodied avatars animated from participants' tracked body motion, and vision-based techniques that reconstruct 3D models from multiple streams of live video input. Drawing on empirical evaluations of an avatar-based ICVE system that both captures and displays eye movement, together with an examination of previous research into gaze, we specify the gaze practices, and the cues used in the perception of gaze, that ICVEs should support. We delineate some of the challenges for vision-based ICVEs and discuss the potential for combining the two approaches in the development of such systems.
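The vision-based approach mentioned above is commonly realised with silhouette-based visual hull reconstruction: each calibrated camera segments the participant from the background, and a voxel is kept only if it projects inside the foreground silhouette of every view. The paper does not give an implementation; the following is a minimal sketch under assumed conventions (a 3×4 projection matrix per camera and a binary silhouette image per camera), with the function name `carve_visual_hull` chosen for illustration.

```python
import numpy as np

def carve_visual_hull(silhouettes, projections, voxels, threshold=0.5):
    """Voxel-based visual hull carving (illustrative sketch).

    silhouettes: list of HxW arrays, >threshold where the participant is.
    projections: list of 3x4 camera projection matrices (one per view).
    voxels:      Nx3 array of candidate voxel centres in world coordinates.
    Returns a boolean mask of voxels inside the intersection of all
    silhouette cones, i.e. the discretised visual hull.
    """
    occupied = np.ones(len(voxels), dtype=bool)
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])  # homogeneous coords
    for sil, P in zip(silhouettes, projections):
        uvw = homog @ P.T                       # project into this view
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        h, w = sil.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        fg = np.zeros(len(voxels), dtype=bool)
        fg[inside] = sil[v[inside], u[inside]] > threshold  # in silhouette?
        occupied &= fg                          # a single miss carves the voxel
    return occupied
```

In a real-time ICVE this carving loop would run per frame over live segmented video streams; polyhedral variants trade the voxel grid for an exact intersection of silhouette cones.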