Sparse regularization for semi-supervised classification

  • Authors:
  • Mingyu Fan, Nannan Gu, Hong Qiao, Bo Zhang

  • Affiliations:
  • Mingyu Fan and Bo Zhang: LSEC and Institute of Applied Mathematics, AMSS, Chinese Academy of Sciences, Beijing 100190, China
  • Nannan Gu and Hong Qiao: Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China

  • Venue:
  • Pattern Recognition
  • Year:
  • 2011

Abstract

Manifold regularization (MR) is a promising regularization framework for semi-supervised learning. It introduces an additional penalty term that regularizes the smoothness of functions on data manifolds and has proved very effective in exploiting the underlying geometric structure of data for classification. The performance of MR algorithms, however, depends highly on the design of this penalty term. In this paper, we propose a new approach that defines the penalty term through the sparse representations of the data instead of adjacency graphs. Building this novel penalty term takes two steps. First, the best sparse linear reconstruction coefficients for each data point are computed by l^1-norm minimization. Second, the learner is subjected to a cost function that aims to preserve these sparse coefficients; this cost function serves as the new penalty term for regularization algorithms. Compared with previous semi-supervised learning algorithms, the new penalty term requires fewer input parameters and has strong discriminative power for classification. We propose a least-squares classifier built on this penalty term, called the Sparse Regularized Least Square Classification (S-RLSC) algorithm. Experiments on real-world data sets show that our algorithm is very effective.
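The two-step construction described in the abstract can be sketched in code. The snippet below is a minimal illustrative sketch, not the authors' implementation: it approximates the l^1-norm minimization of step one with scikit-learn's Lasso solver, builds the sparsity-preserving penalty matrix M = (I - S)^T (I - S), and plugs it into a linear regularized least-squares classifier. The function names, the linear (rather than kernelized) formulation, and the regularization parameters `lam_a`, `lam_i`, and `alpha` are assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_coefficients(X, alpha=0.01):
    """Step 1 (sketch): approximate l^1-minimizing reconstruction weights.

    Row i of the returned matrix S reconstructs X[i] from the remaining
    points; the diagonal is kept at zero so a point never uses itself.
    """
    n = X.shape[0]
    S = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        # Columns of the design matrix are the other data points, so the
        # Lasso coefficients express x_i as a sparse combination of them.
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(X[others].T, X[i])
        S[i, others] = lasso.coef_
    return S

def s_rlsc_linear(X, y_labeled, labeled_idx, lam_a=1e-3, lam_i=1e-2, alpha=0.01):
    """Step 2 (sketch): linear least-squares classifier with the
    sparsity-preserving penalty w^T X^T M X w, M = (I - S)^T (I - S)."""
    n, d = X.shape
    S = sparse_coefficients(X, alpha=alpha)
    I_minus_S = np.eye(n) - S
    M = I_minus_S.T @ I_minus_S          # penalty matrix from sparse coefficients
    XL = X[labeled_idx]                  # labeled points only in the data-fit term
    A = XL.T @ XL + lam_a * np.eye(d) + lam_i * (X.T @ M @ X)
    return np.linalg.solve(A, XL.T @ y_labeled)

# Usage sketch on toy data: two well-separated clusters, 4 labeled points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 0.5, (20, 2)), rng.normal(3, 0.5, (20, 2))])
y_all = np.array([-1.0] * 20 + [1.0] * 20)
labeled_idx = np.array([0, 1, 20, 21])
w = s_rlsc_linear(X, y_all[labeled_idx], labeled_idx)
pred = np.sign(X @ w)
```

Because the penalty is quadratic in the weights, the classifier still has a closed-form solution; the unlabeled points enter only through the penalty matrix M.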