Video event retrieval from a small number of examples using rough set theory

  • Authors:
  • Kimiaki Shirahama; Yuta Matsuoka; Kuniaki Uehara

  • Affiliations:
  • Graduate School of Economics, Kobe University, Nada, Kobe, Japan; Graduate School of System Informatics, Kobe University, Nada, Kobe, Japan; Graduate School of System Informatics, Kobe University, Nada, Kobe, Japan

  • Venue:
  • MMM'11 Proceedings of the 17th International Conference on Advances in Multimedia Modeling - Volume Part I
  • Year:
  • 2011

Abstract

In this paper, we develop an example-based event retrieval method that constructs a model for retrieving events of interest in a video archive, using examples provided by a user. However, this is challenging because shots of the same event are characterized by significantly different features, owing to camera techniques, settings, and so on. That is, the video archive contains a large variety of shots of the event, while the user can provide only a small number of examples. Considering this, we use "rough set theory" to capture the various characteristics of the event. Specifically, rough set theory lets us extract classification rules that correctly identify different subsets of the positive examples. Furthermore, to extract a larger variety of classification rules, we incorporate "bagging" and the "random subspace method" into rough set theory: we define indiscernibility relations among examples based on the outputs of classifiers built on different subsets of examples and different subsets of feature dimensions. Experimental results on TRECVID 2009 video data validate the effectiveness of our example-based event retrieval method.
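The combination of bagging and the random subspace method described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the toy data, the nearest-centroid base classifier, and all parameter values are hypothetical stand-ins. The key idea retained from the abstract is that each classifier is built on a bootstrap sample of examples and a random subset of feature dimensions, and two examples are treated as indiscernible when every classifier assigns them the same output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20 examples with 8 feature dimensions (hypothetical stand-in
# for shot features), half positive and half negative.
X = rng.normal(size=(20, 8))
y = np.repeat([0, 1], 10)
X[y == 1] += 1.0  # shift positive examples so the classes are separable-ish


def nearest_centroid(X_tr, y_tr):
    """Hypothetical base classifier: predict the class of the nearest centroid."""
    c0 = X_tr[y_tr == 0].mean(axis=0)
    c1 = X_tr[y_tr == 1].mean(axis=0)

    def predict(X_te):
        d0 = np.linalg.norm(X_te - c0, axis=1)
        d1 = np.linalg.norm(X_te - c1, axis=1)
        return (d1 < d0).astype(int)

    return predict


n_classifiers = 5
predictions = []
for _ in range(n_classifiers):
    # Bagging: stratified bootstrap sample of examples (stratified here only
    # to guarantee both classes appear in the toy training set).
    idx = np.concatenate(
        [rng.choice(np.where(y == c)[0], size=10, replace=True) for c in (0, 1)]
    )
    # Random subspace method: random subset of feature dimensions.
    dims = rng.choice(X.shape[1], size=4, replace=False)
    clf = nearest_centroid(X[idx][:, dims], y[idx])
    predictions.append(clf(X[:, dims]))

# Output table: row i holds example i's outputs across all classifiers.
outputs = np.stack(predictions, axis=1)


def indiscernible(i, j):
    """Examples i and j are indiscernible if every classifier outputs
    the same label for both."""
    return bool(np.all(outputs[i] == outputs[j]))
```

In the paper's setting, rough set theory would then extract classification rules from such an indiscernibility relation; here only the relation itself is sketched.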