Query-Based Video Event Definition Using Rough Set Theory and High-Dimensional Representation

  • Authors and affiliations:
  • Kimiaki Shirahama, Graduate School of Economics, Kobe University, Kobe, Japan
  • Chieri Sugihara, Graduate School of Engineering, Kobe University, Kobe, Japan
  • Kuniaki Uehara, Graduate School of Engineering, Kobe University, Kobe, Japan

  • Venue:
  • MMM'10 Proceedings of the 16th international conference on Advances in Multimedia Modeling
  • Year:
  • 2010


Abstract

In videos, the same event can be captured with different camera techniques and in different situations, so shots of the same event can have significantly different features. To retrieve such shots collectively, we introduce a method that defines an event using “rough set theory”. Specifically, we extract subsets of shots in which shots of the event can be correctly discriminated from all other shots, and we define the event as the union of these subsets. However, applying rough set theory requires both positive and negative examples, and it is impractical to label a huge number of shots for every possible event. We therefore adopt a “partially supervised learning” approach, in which an event is defined from a small number of positive examples and a large number of unlabeled examples. In particular, we collect negative examples from the unlabeled examples based on their similarities to the positive ones. To calculate these similarities appropriately, we use “subspace clustering”, which finds clusters in different subspaces of the high-dimensional feature space. Experimental results on the TRECVID 2008 video collection validate the effectiveness of our method.
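The negative-example collection step described in the abstract can be illustrated with a minimal sketch: treat the unlabeled shots that are least similar to every positive example as reliable negatives. This is not the paper's actual implementation (which uses subspace clustering to compute similarities); the function name `collect_negatives` and the plain Euclidean similarity are assumptions for illustration only.

```python
import numpy as np

def collect_negatives(positives, unlabeled, num_negatives):
    """Pick the unlabeled shots least similar to any positive example.

    positives: (P, D) array of feature vectors for positive shots.
    unlabeled: (U, D) array of feature vectors for unlabeled shots.
    Returns indices into `unlabeled` of the selected negative examples.
    NOTE: uses plain Euclidean distance as a stand-in similarity; the
    paper instead derives similarities via subspace clustering.
    """
    # Distance from each unlabeled shot to its nearest positive shot.
    diffs = unlabeled[:, None, :] - positives[None, :, :]   # (U, P, D)
    dists = np.linalg.norm(diffs, axis=2).min(axis=1)       # (U,)
    # The farthest unlabeled shots are taken as reliable negatives.
    return np.argsort(dists)[::-1][:num_negatives]

# Usage: two positives near the origin; the two distant unlabeled
# shots (indices 1 and 2) are selected as negatives.
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
unl = np.array([[0.5, 0.0], [10.0, 10.0], [9.0, 9.0], [0.2, 0.1]])
print(sorted(collect_negatives(pos, unl, 2).tolist()))  # → [1, 2]
```

With a better similarity measure (e.g. distances computed inside the clusters' subspaces, as in the paper), the same top-k selection scheme applies unchanged.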