Per-sample multiple kernel approach for visual concept learning

  • Authors:
  • Jingjing Yang; Yuanning Li; Yonghong Tian; Ling-Yu Duan; Wen Gao

  • Affiliations:
  • Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China and National Engineering Laboratory for Video Technology, School of EE & CS, Peking University, Beijing, China an ...
  • Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China and National Engineering Laboratory for Video Technology, School of EE & CS, Peking University, Beijing, China an ...
  • National Engineering Laboratory for Video Technology, School of EE & CS, Peking University, Beijing, China
  • National Engineering Laboratory for Video Technology, School of EE & CS, Peking University, Beijing, China
  • Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China and National Engineering Laboratory for Video Technology, School of EE & CS, Peking University, Beijing, China

  • Venue:
  • Journal on Image and Video Processing - Special issue on selected papers from multimedia modeling conference 2009
  • Year:
  • 2010

Abstract

Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. Because a visual concept often exhibits large appearance variance, a canonical MKL approach may not produce satisfactory results when a uniform kernel combination is applied over the whole input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach that accounts for intraclass diversity to improve discrimination. PS-MKL determines sample-wise kernel weights according to both the kernel functions and the training samples; the kernel weights and the kernel-based classifiers are learned jointly. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out on three benchmark datasets with different characteristics: Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL achieves encouraging performance, comparable to the state of the art, and outperforms canonical MKL.
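The core idea of a sample-wise kernel combination can be sketched as follows. This is a minimal illustration, not the paper's method: the RBF base kernels, the weight matrix `W`, and the function names are all assumptions for the sketch; in PS-MKL the per-sample weights would be learned jointly with the classifier rather than supplied as input.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """RBF kernel matrix between the rows of X and the rows of Y."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def per_sample_combined_kernel(X, Y, gammas, W):
    """Combine M base kernels with sample-wise weights.

    W has shape (len(X), M): W[i, m] weights base kernel m for training
    sample i, so K[i, j] = sum_m W[i, m] * K_m(x_i, y_j). This differs
    from canonical MKL, where one weight vector is shared by all samples.
    """
    K = np.zeros((len(X), len(Y)))
    for m, gamma in enumerate(gammas):
        # Broadcast the per-sample weight of kernel m across row i.
        K += W[:, m : m + 1] * rbf_kernel(X, Y, gamma)
    return K
```

With uniform weights (every row of `W` equal to `1/M`) this reduces to the plain average of the base kernels, i.e. the uniform combination that the abstract contrasts against.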