Event retrieval in video archives using rough set theory and partially supervised learning

  • Authors:
  • Kimiaki Shirahama; Yuta Matsuoka; Kuniaki Uehara

  • Affiliations:
  • Graduate School of Economics, Kobe University, Kobe, Japan 657-8501; Graduate School of Engineering, Kobe University, Kobe, Japan 657-8501; Graduate School of System Informatics, Kobe University, Kobe, Japan 657-8501

  • Venue:
  • Multimedia Tools and Applications
  • Year:
  • 2012

Abstract

This paper develops a query-by-example method for retrieving shots of an event (event shots) using example shots provided by a user. Three main problems are addressed. Firstly, event shots cannot be retrieved with a single model because they exhibit significantly different features due to varied camera techniques, settings, and so forth. This is overcome by using rough set theory to extract multiple classification rules, each specialized to retrieve a portion of the event shots. Secondly, since a user can only provide a small number of example shots, the number of event shots retrievable by the extracted rules is inevitably limited. Bagging and the random subspace method are therefore incorporated to build multiple classifiers that characterize significantly different event shots, depending on the sampled example shots and feature dimensions. However, this diversity can also lead to the retrieval of many unnecessary shots, so rough set theory is used to combine the classifiers into rules that provide greater retrieval accuracy. Lastly, counter example shots, which rough set theory requires, are not provided by the user. A partially supervised learning method is hence used to collect them from shots other than the example shots. Counter example shots that are as similar to the example shots as possible are collected, because they are useful for characterizing the boundary between event shots and the remaining shots. The proposed method is tested on TRECVID 2009 video data.
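
The sketch below illustrates the general shape of such a pipeline, assuming precomputed per-shot feature vectors. It is not the paper's method: the nearest-neighbour selection of counter examples stands in for the partially supervised learning step, and an ensemble of linear SVMs with majority voting stands in for the rough-set rule extraction and combination; the shot data, function names, and parameters are all hypothetical.

```python
# Minimal sketch of a query-by-example event retrieval pipeline, assuming
# precomputed feature vectors per shot. Simplifications: counter examples are
# chosen by distance to the positive centroid (instead of the paper's partially
# supervised learning), and base classifiers are combined by voting (instead of
# rough-set rules).
import numpy as np
from sklearn.svm import LinearSVC


def select_counter_examples(pos, unlabeled, n_neg):
    """Pick unlabeled shots closest to the example shots as counter examples
    (negatives near the boundary between event and non-event shots)."""
    centroid = pos.mean(axis=0)
    dists = np.linalg.norm(unlabeled - centroid, axis=1)
    return unlabeled[np.argsort(dists)[:n_neg]]


def train_ensemble(pos, neg, n_classifiers=20, subspace_ratio=0.5, seed=0):
    """Bagging + random subspace: each base classifier is trained on a bootstrap
    sample of the (few) example shots and a random subset of feature dimensions."""
    rng = np.random.default_rng(seed)
    d = pos.shape[1]
    k = max(1, int(subspace_ratio * d))
    ensemble = []
    for _ in range(n_classifiers):
        dims = rng.choice(d, size=k, replace=False)
        p_idx = rng.choice(len(pos), size=len(pos), replace=True)  # bootstrap positives
        n_idx = rng.choice(len(neg), size=len(neg), replace=True)  # bootstrap negatives
        X = np.vstack([pos[p_idx][:, dims], neg[n_idx][:, dims]])
        y = np.concatenate([np.ones(len(p_idx)), np.zeros(len(n_idx))])
        ensemble.append((dims, LinearSVC().fit(X, y)))
    return ensemble


def score_shots(ensemble, shots):
    """Rank candidate shots by the fraction of base classifiers voting 'event'."""
    return np.mean([clf.predict(shots[:, dims]) for dims, clf in ensemble], axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    example_shots = rng.normal(1.0, 1.0, size=(10, 64))   # user-provided positives (synthetic)
    archive_shots = rng.normal(0.0, 1.0, size=(500, 64))  # unlabeled archive (synthetic)
    negatives = select_counter_examples(example_shots, archive_shots, n_neg=50)
    ensemble = train_ensemble(example_shots, negatives)
    top20 = np.argsort(-score_shots(ensemble, archive_shots))[:20]
    print("Top-20 candidate event shots:", top20)
```

The per-classifier randomization of both training shots and feature dimensions is what produces the diversity the abstract describes; the combination step is where the paper's rough-set rule extraction would replace the simple vote used here.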