Context-based video retrieval system for the life-log applications

  • Authors:
  • Tetsuro Hori; Kiyoharu Aizawa

  • Affiliations:
  • The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan (both authors)

  • Venue:
  • MIR '03: Proceedings of the 5th ACM SIGMM International Workshop on Multimedia Information Retrieval
  • Year:
  • 2003

Abstract

Recently, the terms "wearable computing" and "ubiquitous computing" have become common, and expectations for such new computing environments are growing. One of their defining characteristics is that they embed computers in our everyday lives. In such environments, personal experiences can be digitized through continuous recording with a wearable video camera [6, 7], which could lead to an "automatic life-log application". However, the resulting amount of video content will clearly be enormous. Accordingly, to retrieve and browse desired scenes, this vast quantity of video data must be organized with structural information. In this paper, we attempt to develop a "context-based video retrieval system for life-log applications". This wearable system continuously captures data not only from a wearable camera and a microphone, but also from various sensors, such as a brain-wave analyzer, a GPS receiver, an acceleration sensor, and a gyro sensor, in order to extract the user's context. In addition, the system provides functions that make efficient video browsing and retrieval possible by combining these sensor data with several databases. For example, the system supports queries such as: "I talked with Kenji while walking at a shopping center in Shinjuku on a cloudy day in mid-May. The conversation was very interesting! I want to see the video of our outing to remember the contents of the conversation."
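
To make the example query concrete, the sketch below shows one way per-segment context annotations derived from such sensors (place from GPS, motion state from the acceleration/gyro sensors, weather from an external database, conversation partners from the microphone) could be stored and filtered. This is a minimal, hypothetical illustration, not the authors' implementation; the `Segment` record, the `search` function, and all field names are assumptions made for this example.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

# Hypothetical record: one entry per video segment, annotated with
# context extracted from the wearable sensors described in the abstract.
@dataclass
class Segment:
    video_file: str
    start: datetime
    end: datetime
    place: Optional[str] = None           # e.g. reverse-geocoded from GPS
    motion: Optional[str] = None          # e.g. "walking", derived from acceleration/gyro data
    weather: Optional[str] = None         # e.g. "cloudy", looked up in a weather database
    people: List[str] = field(default_factory=list)  # detected conversation partners

def search(segments, *, place=None, motion=None, weather=None,
           person=None, month=None):
    """Return segments whose context annotations satisfy every given condition."""
    results = []
    for s in segments:
        if place and s.place != place:
            continue
        if motion and s.motion != motion:
            continue
        if weather and s.weather != weather:
            continue
        if person and person not in s.people:
            continue
        if month and s.start.month != month:
            continue
        results.append(s)
    return results

# Example query corresponding to "talked with Kenji while walking at a
# shopping center in Shinjuku on a cloudy day in mid-May".
if __name__ == "__main__":
    log = [
        Segment("log_0512_1400.mpg", datetime(2003, 5, 12, 14, 0),
                datetime(2003, 5, 12, 14, 5), place="Shinjuku",
                motion="walking", weather="cloudy", people=["Kenji"]),
        Segment("log_0601_0900.mpg", datetime(2003, 6, 1, 9, 0),
                datetime(2003, 6, 1, 9, 5), place="Hongo",
                motion="sitting", weather="sunny"),
    ]
    for hit in search(log, place="Shinjuku", motion="walking",
                      weather="cloudy", person="Kenji", month=5):
        print(hit.video_file, hit.start)
```

In this sketch the retrieval is a simple conjunctive filter over pre-extracted context labels; the point is only that sensor-derived metadata, rather than the video frames themselves, drives the query.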