Scalable training with approximate incremental Laplacian Eigenmaps and PCA

  • Authors:
  • Eleni Mantziou;Symeon Papadopoulos;Yiannis Kompatsiaris

  • Affiliations:
Center for Research and Technology Hellas (CERTH), Thessaloniki, Greece (all authors)

  • Venue:
  • Proceedings of the 21st ACM international conference on Multimedia
  • Year:
  • 2013


Abstract

The paper describes the approach, the experimental settings, and the results obtained by the proposed methodology at the ACM Yahoo! Multimedia Grand Challenge. Its main contribution is the use of fast and efficient features together with a highly scalable semi-supervised learning approach, Approximate Laplacian Eigenmaps (ALEs), and an extension that computes the test set incrementally, learning concepts in time linear in the number of images (both labelled and unlabelled). Two local visual features, aggregated with the VLAD method and reduced with PCA, are combined to improve efficiency and time complexity. The methodology achieves somewhat better accuracy than the baseline (linear SVM) on small training sets, and its performance improves further as the training data grow. Performing ALE fusion on a training set of 50K images per concept yielded a MiAP score of 0.4223, which was among the highest scores achieved by the proposed approach.