Semantic video classification by integrating unlabeled samples for classifier training

  • Authors:
  • Jianping Fan; Hangzai Luo

  • Affiliations:
  • UNC-Charlotte, Charlotte, NC; UNC-Charlotte, Charlotte, NC

  • Venue:
  • Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval
  • Year:
  • 2004


Abstract

Semantic video classification has become an active research topic for enabling more effective video retrieval and knowledge discovery from large-scale video databases. However, most existing techniques for classifier training require a large number of hand-labeled samples to learn correctly. To address this problem, we propose a semi-supervised framework that achieves incremental classifier training by integrating a limited number of labeled samples with a large number of unlabeled samples. Specifically, this semi-supervised framework includes: (a) modeling the semantic video concepts by using finite mixture models to approximate the class distributions of the relevant salient objects; and (b) developing an adaptive EM algorithm that integrates the unlabeled samples to achieve parameter estimation and model selection simultaneously. Experimental results in a specific domain of medical videos are also provided.
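
The semi-supervised EM idea described in the abstract can be pictured with a small sketch. The Python code below is not the authors' implementation: it is a minimal semi-supervised EM for a Gaussian mixture in which labeled samples keep fixed (hard) responsibilities, unlabeled samples receive soft responsibilities in the E-step, and an assumed unlabeled_weight parameter stands in for the paper's adaptive weighting of unlabeled data. It also simplifies the model to one Gaussian component per concept class and omits the joint model selection that the adaptive EM algorithm performs.

```python
# Minimal semi-supervised EM sketch (assumptions noted above), not the paper's algorithm.
import numpy as np
from scipy.stats import multivariate_normal


def semi_supervised_em(X_l, y_l, X_u, k, unlabeled_weight=0.5, n_iter=50, seed=0):
    """Fit a k-component Gaussian mixture from labeled (X_l, y_l) and unlabeled X_u data."""
    rng = np.random.default_rng(seed)
    X = np.vstack([X_l, X_u])
    d = X.shape[1]

    # Initialize each component from its labeled samples when available,
    # otherwise from a randomly chosen sample.
    means = np.array([X_l[y_l == c].mean(axis=0) if np.any(y_l == c)
                      else X[rng.integers(len(X))] for c in range(k)])
    covs = np.array([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(k)])
    weights = np.full(k, 1.0 / k)

    # Labeled samples keep fixed one-hot responsibilities throughout.
    R_l = np.eye(k)[y_l]

    for _ in range(n_iter):
        # E-step: soft responsibilities for the unlabeled samples only.
        dens = np.column_stack([
            weights[c] * multivariate_normal.pdf(X_u, means[c], covs[c])
            for c in range(k)])
        R_u = dens / dens.sum(axis=1, keepdims=True)

        # Down-weight unlabeled responsibilities (assumed fixed weight here;
        # the paper adapts the influence of unlabeled samples).
        R = np.vstack([R_l, unlabeled_weight * R_u])

        # M-step: weighted updates of mixture weights, means, and covariances.
        Nk = R.sum(axis=0)
        means = (R.T @ X) / Nk[:, None]
        for c in range(k):
            diff = X - means[c]
            covs[c] = (R[:, c, None] * diff).T @ diff / Nk[c] + 1e-6 * np.eye(d)
        weights = Nk / Nk.sum()

    return weights, means, covs
```

In such a setup, a caller would fit one mixture per semantic concept over salient-object features and assign a new video clip to the concept with the highest posterior; the value of the unlabeled samples is that they refine the mixture parameters beyond what the limited labeled set alone would support.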