We have developed a face-to-avatar system that integrates a blimp with a virtual avatar to create a unique telepresence system. Our aerotop telepresence system has two advantages compared with conventional telepresence systems. The first is that it provides unique communication between the user and a physical blimp avatar. The blimp works as an avatar and carries several pieces of equipment, including a projector and a speaker. Because the avatar is a physical object that can move freely in the real world, the user's presence is dramatically enhanced compared with conventional virtual avatars (e.g., CG characters and images). The second is that the user's senses are augmented because the blimp detects dynamic information in the real world. For example, the onboard camera provides the user with a distinctive floating view, and the microphone picks up a wide variety of sounds, such as conversations and environmental noise. This paper describes our face-to-avatar concept and its implementation.
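The bidirectional media flow implied above can be sketched as follows. This is a minimal illustrative model, not the paper's implementation: all class and method names (`BlimpAvatar`, `UserStation`, `send_presence`, `perceive`) are hypothetical, and real media transport is reduced to in-memory frames.

```python
# Hypothetical sketch of the face-to-avatar data flow: the user's face and
# voice go out to the physical blimp avatar (projector + speaker), while the
# blimp's floating camera view and ambient audio come back to the user.
from dataclasses import dataclass, field

@dataclass
class Frame:
    """One unit of media flowing in either direction."""
    kind: str       # "face_video", "voice", "floating_view", "ambient_audio"
    payload: bytes

@dataclass
class BlimpAvatar:
    """Physical avatar side: projects the user's face, plays the user's
    voice, and captures an aerial view plus environmental sound."""
    projected: list = field(default_factory=list)
    played: list = field(default_factory=list)

    def receive(self, frame: Frame) -> None:
        if frame.kind == "face_video":
            self.projected.append(frame.payload)   # shown via the projector
        elif frame.kind == "voice":
            self.played.append(frame.payload)      # played on the speaker

    def capture(self) -> list:
        # Onboard camera and microphone sense the real world.
        return [Frame("floating_view", b"aerial-frame"),
                Frame("ambient_audio", b"room-sound")]

@dataclass
class UserStation:
    """Remote user's side: transmits presence, receives the blimp's senses."""
    received: list = field(default_factory=list)

    def send_presence(self, avatar: BlimpAvatar) -> None:
        avatar.receive(Frame("face_video", b"user-face"))
        avatar.receive(Frame("voice", b"user-speech"))

    def perceive(self, avatar: BlimpAvatar) -> None:
        self.received.extend(avatar.capture())

user = UserStation()
blimp = BlimpAvatar()
user.send_presence(blimp)   # face and voice reach the physical avatar
user.perceive(blimp)        # floating view and ambient audio come back
```

The key design point the sketch highlights is the symmetry of the link: the blimp is simultaneously an output device (rendering the user's presence in physical space) and an input device (extending the user's senses with an aerial viewpoint).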