Semisupervised kernel matrix learning by kernel propagation

  • Authors:
  • Enliang Hu, Department of Mathematics, Yunnan Normal University, Kunming, China
  • Songcan Chen, Department of Computer Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
  • Daoqiang Zhang, Department of Computer Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
  • Xuesong Yin, School of Information and Engineering, Zhejiang Radio and TV University, Hangzhou, China

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 2010

Abstract

The goal of semisupervised kernel matrix learning (SS-KML) is to learn a kernel matrix over all the given samples when only a small amount of supervised information, such as class labels or pairwise constraints, is provided. Despite extensive research, the performance of SS-KML still leaves room for improvement in both effectiveness and efficiency. For example, a recent pairwise constraints propagation (PCP) algorithm formulates SS-KML as a semidefinite programming (SDP) problem, but its computation is very expensive, which undoubtedly restricts PCP's scalability in practice. In this paper, a novel algorithm, called kernel propagation (KP), is proposed to improve the overall performance of SS-KML. The main idea of KP is to first learn a small-sized sub-kernel matrix (named the seed-kernel matrix) and then propagate it into a larger full-kernel matrix. Specifically, KP consists of three stages: 1) separate the supervised sample (sub)set χl from the full sample set χ; 2) learn a seed-kernel matrix on χl by solving a small-scale SDP problem; and 3) propagate the learnt seed-kernel matrix into a full-kernel matrix on χ. Furthermore, following the idea in KP, we naturally develop two conveniently realizable out-of-sample extensions for KML: one is a batch-style extension, and the other is an online-style extension. Experiments demonstrate that KP compares favorably with three state-of-the-art algorithms in both effectiveness and efficiency, and that its out-of-sample extensions are promising as well.
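The propagation idea in stage 3 can be sketched in a few lines of NumPy. This is only an illustrative sketch, not the paper's actual method: the propagation operator `A` below (row-normalized RBF affinities from every sample to the labeled subset, with a unit bandwidth) is an assumption chosen for simplicity, and the SDP-based seed-kernel learning of stage 2 is replaced by a given PSD matrix `K_seed`. What the sketch does show correctly is how a small seed kernel on χl can be expanded into a full kernel on χ via `K = A K_seed Aᵀ`, which stays positive semidefinite whenever the seed kernel is.

```python
import numpy as np

def kernel_propagation(X, labeled_idx, K_seed):
    """Illustrative sketch of the KP idea: expand a seed-kernel matrix
    learnt on the labeled subset X[labeled_idx] into a full-kernel
    matrix over all samples X.

    The propagation weights used here (row-normalized RBF affinities,
    bandwidth 1) are a hypothetical choice for illustration; the paper
    derives its own propagation operator.
    """
    # Squared distances from every sample to each labeled sample.
    diff = X[:, None, :] - X[labeled_idx][None, :, :]
    d2 = (diff ** 2).sum(axis=-1)                 # shape (n, l)
    W = np.exp(-d2)                               # RBF affinities
    A = W / W.sum(axis=1, keepdims=True)          # row-stochastic weights
    # K = A K_seed A^T: PSD whenever K_seed is PSD, since
    # x^T A K_seed A^T x = (A^T x)^T K_seed (A^T x) >= 0.
    return A @ K_seed @ A.T
```

A quick usage check: with a PSD seed kernel on three labeled samples, the propagated kernel on all six samples is symmetric and PSD, matching what the propagation step requires of a valid full-kernel matrix.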