Applications such as telepresence and training involve displaying real or synthetic humans to multiple viewers. On conventional displays, non-verbal cues such as head pose, gaze direction, body posture, and facial expression are difficult to convey correctly to all viewers. In addition, a framed image of a human conveys only a limited physical sense of presence, primarily through the display's location. While progress continues on articulated robots that mimic humans, the focus has been on the motion and behavior of the robots themselves. We introduce a new approach for robotic avatars of real people: using cameras and projectors to capture and map the dynamic motion and appearance of a real person onto a humanoid animatronic model. We call these devices animatronic Shader Lamps Avatars (SLA).

We present a proof-of-concept prototype comprising a camera, a tracking system, a digital projector, and a life-sized styrofoam head mounted on a pan-tilt unit. The system captures imagery of a moving, talking user and maps the appearance and motion onto the animatronic SLA, delivering a dynamic, real-time representation of the user to multiple viewers.
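The control loop implied by this pipeline can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class and function names, the pan-tilt limits, and the idea of clamping the tracked head pose to the unit's mechanical range are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass
class HeadPose:
    """Tracked orientation of the user's head (hypothetical representation)."""
    yaw_deg: float    # left/right rotation
    pitch_deg: float  # up/down rotation

def pose_to_pan_tilt(pose: HeadPose,
                     pan_limits=(-90.0, 90.0),
                     tilt_limits=(-30.0, 30.0)) -> tuple:
    """Map the tracked head pose to pan-tilt commands, clamped to the
    unit's assumed mechanical range (limits here are illustrative)."""
    clamp = lambda v, lo, hi: max(lo, min(hi, v))
    pan = clamp(pose.yaw_deg, *pan_limits)
    tilt = clamp(pose.pitch_deg, *tilt_limits)
    return pan, tilt

def render_frame(pose: HeadPose, face_image):
    """One iteration of the loop: command the animatronic head, then hand
    the captured face imagery to the projection stage."""
    pan, tilt = pose_to_pan_tilt(pose)
    # In the real system the projector image would be pre-warped so that
    # the user's facial features land on the corresponding regions of the
    # physical head model; this sketch just pairs the commanded angles
    # with the current face texture.
    return {"pan": pan, "tilt": tilt, "texture": face_image}
```

Per frame, the tracker's head pose drives the pan-tilt unit while the camera's face imagery is warped for projection, which is the capture-and-map cycle the prototype performs in real time.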