Efficient retrieval of life log based on context and content

  • Authors:
  • Kiyoharu Aizawa;Datchakorn Tancharoen;Shinya Kawasaki;Toshihiko Yamasaki

  • Affiliations:
  • The University of Tokyo, Chiba, Japan;The University of Tokyo, Chiba, Japan;The University of Tokyo, Chiba, Japan;The University of Tokyo, Chiba, Japan

  • Venue:
  • Proceedings of the 1st ACM workshop on Continuous archival and retrieval of personal experiences
  • Year:
  • 2004


Abstract

In this paper, we present continuous capture of our life log with various sensors plus additional data, and propose effective retrieval methods that use this context and content. Our life log system records video, audio, acceleration sensor, gyro, GPS, annotations, documents, web pages, and emails. In our previous studies [8], [9], we presented a retrieval methodology that relies mainly on context information derived from sensor data. In this paper, we extend that methodology with two additional functions: (1) spatio-temporal sampling for extracting key frames for summarization, and (2) conversation scene detection. With the first, key frames for summarization are extracted using time and location (GPS) data. Because our life log captures dense location data, we can also exploit its derivatives, that is, the speed and acceleration of the person's movement, when selecting the summarizing key frames. With the second, we introduce content analysis of visual and audio data to detect conversation scenes. Our previous work investigated context-based retrieval, which differs from the majority of image/video retrieval studies, which focus on content-based retrieval; detected conversation scenes will serve as important tags for retrieval from our life log data. We describe the present system and the additional functions, together with preliminary results for those functions.
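The spatio-temporal sampling described above can be illustrated with a minimal sketch. The sketch below is not the authors' implementation; it assumes a GPS track given as `(time, lat, lon)` tuples and keeps a frame as a key frame whenever the wearer has moved more than a distance threshold or a time threshold has elapsed, a simplified stand-in for the paper's use of location data and its derivatives (speed, acceleration). The function names and thresholds are hypothetical.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two GPS fixes.
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def spatio_temporal_keyframes(track, min_dist_m=50.0, min_dt_s=60.0):
    """Select key-frame indices from a GPS track.

    track: list of (t_seconds, lat, lon) tuples, time-ordered.
    A frame becomes a key frame when the wearer has moved at least
    min_dist_m, or at least min_dt_s has elapsed, since the last
    key frame (hypothetical thresholds for illustration)."""
    if not track:
        return []
    keys = [0]
    last_t, last_lat, last_lon = track[0]
    for i, (t, lat, lon) in enumerate(track[1:], start=1):
        moved = haversine_m(last_lat, last_lon, lat, lon)
        if moved >= min_dist_m or t - last_t >= min_dt_s:
            keys.append(i)
            last_t, last_lat, last_lon = t, lat, lon
    return keys
```

A denser track (e.g. walking through a station) thus yields more key frames than a stationary interval, which matches the intuition that movement marks event boundaries in a life log.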