Laplacian adaptive context-based SVM for video concept detection

  • Authors:
  • Wei Jiang; Alexander Loui

  • Affiliations:
  • Eastman Kodak Company, Rochester, NY, USA; Eastman Kodak Company, Rochester, NY, USA

  • Venue:
  • WSM '11 Proceedings of the 3rd ACM SIGMM international workshop on Social media
  • Year:
  • 2011

Abstract

Practical semantic concept detection problems usually have the following challenging conditions: the amount of unlabeled test data keeps growing and newly acquired data are incrementally added to the collection; the domain difference between newly acquired data and the original labeled training data is not negligible; and only very limited, or even no, partial annotations are available over newly acquired data. To accommodate these issues, we propose a Laplacian Adaptive Context-based SVM (LAC-SVM) algorithm that jointly uses four techniques to enhance classification: cross-domain learning that adapts previous classifiers learned from a source domain to classify new data in the target domain; semi-supervised learning that leverages information from unlabeled data to help training; multi-concept learning that uses concept relations to enhance individual concept detection; and active learning that improves the efficiency of manual annotation by actively querying users. Specifically, LAC-SVM adaptively applies concept classifiers and concept affinity relations computed from a source domain to classify data in the target domain, and at the same time, incrementally updates the classifiers and concept relations according to the target data. LAC-SVM can be conducted without newly labeled target data or with partially labeled target data, and in the second scenario the two-dimensional active learning mechanism of selecting data-concept pairs is adopted. Experiments over three large-scale video sets show that LAC-SVM can achieve better detection accuracy with less computation compared with several state-of-the-art methods.
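To make the two-dimensional active learning idea concrete, the sketch below selects the most uncertain (video, concept) pairs for manual annotation, rather than querying whole videos across all concepts. This is a minimal illustration using a simple margin-based uncertainty heuristic over SVM decision scores; it is not the paper's exact selection criterion, and the function name and data layout are assumptions for this example.

```python
import numpy as np

def select_data_concept_pairs(scores, budget):
    """Pick the `budget` most uncertain (sample, concept) pairs to annotate.

    scores: (n_samples, n_concepts) array of classifier outputs, e.g. SVM
    decision values roughly in [-1, 1]. Uncertainty is modeled here as
    closeness to the decision boundary (|score| near 0) -- an illustrative
    heuristic, not the paper's exact two-dimensional selection rule.
    """
    uncertainty = -np.abs(scores)                       # near 0 => most uncertain
    order = np.argsort(uncertainty.ravel())[::-1][:budget]
    # Convert flat indices back to (sample index, concept index) pairs.
    return [divmod(int(i), scores.shape[1]) for i in order]

# Toy example: 3 videos, 2 concepts.
scores = np.array([[0.90, -0.10],
                   [0.05, -0.80],
                   [-0.40, 0.02]])
pairs = select_data_concept_pairs(scores, budget=2)
# The two scores closest to the boundary are (video 2, concept 1) and
# (video 1, concept 0), so those pairs are queried first.
```

After the selected pairs are labeled, the target-domain classifiers and concept relations would be incrementally updated, as described in the abstract.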