Regularized semi-supervised latent dirichlet allocation for visual concept learning

  • Authors:
  • Liansheng Zhuang;Lanbo She;Jingjing Huang;Jiebo Luo;Nenghai Yu

  • Affiliations:
  • MOE-MS Keynote Lab of MCC, USTC, Hefei, China and School of Information Science and Technology, USTC, Hefei, China;School of Information Science and Technology, USTC, Hefei, China;School of Information Science and Technology, USTC, Hefei, China;Kodak Research Labs, Eastman Kodak Company, Rochester, New York;MOE-MS Keynote Lab of MCC, USTC, Hefei, China and School of Information Science and Technology, USTC, Hefei, China

  • Venue:
  • MMM'11: Proceedings of the 17th International Conference on Advances in Multimedia Modeling - Volume Part I
  • Year:
  • 2011


Abstract

Topic models are a popular tool for visual concept learning. Current topic models are either unsupervised or fully supervised. Although large numbers of labeled images can significantly improve the performance of topic models, they are costly to acquire. Meanwhile, billions of unlabeled images are freely available on the Internet. In this paper, to take advantage of both limited labeled training images and abundant unlabeled images, we propose a novel technique called regularized Semi-supervised Latent Dirichlet Allocation (r-SSLDA) for learning visual concept classifiers. Instead of introducing a new topic model, we seek an efficient way to learn topic models in a semi-supervised manner. r-SSLDA combines semi-supervised learning and the supervised topic model within a single regularization framework. Experiments on Caltech 101 and Caltech 256 show that r-SSLDA outperforms unsupervised LDA and achieves performance competitive with fully supervised LDA, while sharply reducing the number of labeled images required.
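To make the semi-supervised-over-topic-features idea concrete, the sketch below shows one *generic* way such a pipeline could be assembled; it is not the authors' r-SSLDA algorithm. It fits an unsupervised LDA on bag-of-visual-words histograms, then propagates a few labels to the unlabeled images via a graph-based regularizer (scikit-learn's LabelSpreading stands in for the paper's regularization framework). All data here are synthetic placeholders.

```python
# Illustrative sketch only: plain LDA topic features + graph-based label
# propagation as a stand-in for a semi-supervised regularization framework.
# This is NOT the r-SSLDA method from the paper.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)

# Synthetic bag-of-visual-words histograms: 500 images, 1000-word codebook.
n_images, vocab_size = 500, 1000
X_counts = rng.poisson(lam=1.0, size=(n_images, vocab_size))

# Unsupervised LDA gives a low-dimensional topic representation per image.
lda = LatentDirichletAllocation(n_components=30, random_state=0)
theta = lda.fit_transform(X_counts)          # (n_images, 30) topic proportions

# Only a small fraction of images carry concept labels; the rest are -1.
true_labels = rng.integers(0, 2, size=n_images)   # hypothetical binary concept
labels = np.full(n_images, -1)
labeled_idx = rng.choice(n_images, size=25, replace=False)
labels[labeled_idx] = true_labels[labeled_idx]

# Graph-based semi-supervised classifier over topic features:
# labels spread along a k-NN graph built in topic space.
clf = LabelSpreading(kernel="knn", n_neighbors=7)
clf.fit(theta, labels)

pred = clf.transduction_                     # labels inferred for all images
print("accuracy on unlabeled images:",
      (pred[labels == -1] == true_labels[labels == -1]).mean())
```

In this toy setup, the graph regularizer plays the role of pulling information from the many unlabeled images, while the handful of labeled images anchors the concept; the paper's contribution is to couple this kind of semi-supervision with a supervised topic model rather than with plain LDA features.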