Perplexity-based evidential neural network classifier fusion using MPEG-7 low-level visual features

  • Authors:
  • Rachid Benmokhtar;Benoit Huet

  • Affiliations:
  • Institut EURECOM, Valbonne, France;Institut EURECOM, Valbonne, France

  • Venue:
  • MIR '08 Proceedings of the 1st ACM international conference on Multimedia information retrieval
  • Year:
  • 2008


Abstract

In this paper, an automatic content-based video shot indexing framework is proposed, employing five types of MPEG-7 low-level visual features (color, texture, shape, motion and face). Once the set of features representing the video content is determined, the question of how to combine the individual classifier outputs associated with each feature into a final semantic decision for the shot must be addressed, with the goal of bridging the semantic gap between the low-level visual features and the high-level semantic concepts. To this end, a novel approach called "perplexity-based weighted descriptors" is proposed, applied before our evidential combiner NNET [3], to obtain an adaptive classifier fusion scheme, PENN (Perplexity-based Evidential Neural Network). Experimental results, obtained in the framework of the TRECVid'07 high-level feature extraction task, demonstrate the efficiency and the improvement provided by the proposed scheme.
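The abstract does not spell out how perplexity enters the fusion, but the general idea of perplexity-based weighting can be sketched as follows. This is an illustrative assumption, not the paper's exact formulation (which feeds the weighted descriptors into the evidential NNET combiner): each per-feature classifier emits a posterior over the concept classes, the perplexity of that posterior measures how uncertain the classifier is, and features whose classifiers are less perplexed receive larger weights in the combination. The function names (`perplexity`, `perplexity_weighted_fusion`) are hypothetical.

```python
import math

def perplexity(probs):
    # Perplexity of a posterior distribution: 2^H(p), where H is the
    # Shannon entropy in bits. A peaked (confident) distribution has
    # low perplexity; a uniform one over K classes has perplexity K.
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return 2.0 ** h

def perplexity_weighted_fusion(per_feature_posteriors):
    # per_feature_posteriors: one posterior per visual feature
    # (e.g. color, texture, shape, motion, face), each a probability
    # distribution over the same set of semantic concepts.
    # Each feature is weighted inversely to its perplexity, so more
    # confident classifiers contribute more to the fused decision.
    weights = [1.0 / perplexity(p) for p in per_feature_posteriors]
    total = sum(weights)
    weights = [w / total for w in weights]
    n_classes = len(per_feature_posteriors[0])
    fused = [
        sum(w * p[c] for w, p in zip(weights, per_feature_posteriors))
        for c in range(n_classes)
    ]
    return fused
```

For example, fusing a confident color classifier with an uninformative texture classifier pulls the combined posterior toward the confident one, since the uniform posterior's perplexity equals the number of classes and its weight shrinks accordingly.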