Collaborative capturing, interpreting, and sharing of experiences

  • Authors:
  • Yasuyuki Sumi, Sadanori Ito, Tetsuya Matsuguchi, Sidney Fels, Shoichiro Iwasawa, Kenji Mase, Kiyoshi Kogure, Norihiro Hagita

  • Affiliations:
  • Graduate School of Informatics, Kyoto University, Kyoto, Japan and ATR Media Information Science Laboratories, Kyoto, Japan; ATR Media Information Science Laboratories, Kyoto, Japan and Graduate School of Engineering, Tokyo University of Agriculture and Technology, Tokyo, Japan and ATR Intelligent Robotics and Communication Laboratories, Kyoto, Japan; University of California, San Francisco, USA; The University of British Columbia, Vancouver, Canada; ATR Media Information Science Laboratories, Kyoto, Japan and ATR Intelligent Robotics and Communication Laboratories, Kyoto, Japan; ATR Media Information Science Laboratories, Kyoto, Japan and ATR Intelligent Robotics and Communication Laboratories, Kyoto, Japan and Information Technology Center, Nagoya University, Nagoya, Japan; ATR Media Information Science Laboratories, Kyoto, Japan; ATR Intelligent Robotics and Communication Laboratories, Kyoto, Japan

  • Venue:
  • Personal and Ubiquitous Computing - Memory and Sharing of Experiences
  • Year:
  • 2007

Abstract

This paper proposes the notion of an interaction corpus: a captured collection of human behaviors and of interactions among humans and artifacts. Digital multimedia and ubiquitous sensor technologies make it possible to capture and store interactions that are automatically annotated. A very large-scale accumulated corpus provides an important infrastructure for a future digital society, in which both humans and computers can understand the verbal and non-verbal mechanisms of human interaction. The interaction corpus can also serve as a well-structured record of experience that can be shared with other people for communication and the creation of further experiences. Our approach employs wearable and ubiquitous sensors, such as video cameras, microphones, and tracking tags, to capture events from multiple viewpoints simultaneously. We demonstrate an application that generates a video-based summary of an experience, reconfigured automatically from the interaction corpus.
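
To make the pipeline concrete, below is a minimal sketch in Python of how such a corpus might be organized and queried. The paper does not publish code, so every name here (Clip, InteractionCorpus, summary_for) is an illustrative assumption rather than the authors' API. The sketch models clips captured from multiple sensor viewpoints, automatically annotated with the participants detected via tracking tags, and a summary step that reconfigures the corpus into a chronological, multi-viewpoint summary for one person.

from dataclasses import dataclass, field

@dataclass
class Clip:
    """One captured segment from a single sensor viewpoint.
    Hypothetical data model; not from the paper."""
    sensor_id: str          # e.g. a wearable camera or a room-mounted camera
    start: float            # seconds from session start
    end: float
    participants: frozenset # IDs detected via tracking tags (automatic annotation)

@dataclass
class InteractionCorpus:
    clips: list = field(default_factory=list)

    def add(self, clip: Clip) -> None:
        self.clips.append(clip)

    def summary_for(self, person: str) -> list:
        """Reconfigure the corpus into a chronological experience summary:
        every clip, from any viewpoint, in which `person` was annotated."""
        selected = [c for c in self.clips if person in c.participants]
        return sorted(selected, key=lambda c: c.start)

# Usage: two viewpoints capture the same encounter; the summary for "alice"
# interleaves clips from her wearable camera and a room-mounted camera.
corpus = InteractionCorpus()
corpus.add(Clip("alice_wearable", 0.0, 12.5, frozenset({"alice", "bob"})))
corpus.add(Clip("room_cam_1", 5.0, 20.0, frozenset({"alice", "bob", "carol"})))
corpus.add(Clip("bob_wearable", 30.0, 41.0, frozenset({"bob"})))

for clip in corpus.summary_for("alice"):
    print(f"{clip.sensor_id}: {clip.start:.1f}-{clip.end:.1f}s "
          f"with {sorted(clip.participants)}")

The design choice worth noting is that the summary is not stored anywhere: it is recomputed on demand from the annotated corpus, which is what makes the same stored experience reusable and shareable from different participants' perspectives.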