Semi-supervised learning via sparse model

  • Authors:
  • Yu Wang; Sheng Tang; Yan-Tao Zheng; Yong-Dong Zhang; Jin-Tao Li

  • Venue:
  • Neurocomputing
  • Year:
  • 2014

Quantified Score

Hi-index 0.01

Abstract

Graph-based Semi-Supervised Learning (SSL) methods are among the most widely used SSL methods due to their high accuracy. They satisfy the manifold assumption, albeit at high computational cost, but they do not satisfy the cluster assumption. In this paper, we propose a Semi-supervised learning via SPArse (SSPA) model. Since SSPA uses sparse matrix multiplication to depict the adjacency relations among samples, it can approximate the low-dimensional manifold structure of the samples with lower computational complexity than graph-based SSL methods. Each column of this sparse matrix corresponds to the sparse representation of one sample; the rationale is that the inner product of sparse representations can itself be sparse under certain constraints. Because the dictionary in the SSPA model depicts the distribution of the entire sample set, the sparse representation of a sample encodes its spatial location. Therefore, in the SSPA model the manifold structure of the samples is computed from their locations in the intrinsic geometry of the distribution rather than from their feature vectors. To meet the cluster assumption, we propose a structured dictionary learning algorithm that explicitly reveals the cluster structure of the dictionary. We develop SSPA algorithms with both the structured dictionary and the non-structured one, and experiments show that our methods are efficient and outperform state-of-the-art graph-based SSL methods.
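The abstract does not spell out the SSPA construction, but the core idea of building sample adjacency from inner products of sparse representations over a dictionary can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the ISTA sparse coder, the toy two-cluster data, and the sample-based dictionary are all assumptions made for the sketch.

```python
import numpy as np

def sparse_code(X, D, lam=0.1, n_iter=200):
    """Sparse-code each column of X over dictionary D via ISTA
    (proximal gradient descent for the lasso problem). A stand-in
    for whatever sparse coder the SSPA model actually uses."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    S = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        G = D.T @ (D @ S - X)              # gradient of 0.5*||X - D S||^2
        S = S - G / L
        S = np.sign(S) * np.maximum(np.abs(S) - lam / L, 0.0)  # soft threshold
    return S

rng = np.random.default_rng(0)
# Toy samples: two well-separated clusters in 5-D (10 samples each).
X = np.hstack([rng.normal(0, 0.1, (5, 10)) + 1.0,
               rng.normal(0, 0.1, (5, 10)) - 1.0])
# Hypothetical dictionary: normalized random samples, standing in for
# the learned (structured or non-structured) dictionary of the paper.
D = X[:, rng.choice(X.shape[1], 8, replace=False)]
D = D / np.linalg.norm(D, axis=0)

S = sparse_code(X, D)   # columns are sparse representations of the samples
W = S.T @ S             # adjacency from inner products of sparse codes
```

Since each column of `S` encodes where its sample lies relative to the dictionary atoms, entries of `W = S.T @ S` for samples in the same cluster tend to be large, while cross-cluster entries stay small or negative, which is the adjacency behavior the abstract describes.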