Kernel-Based Transductive Learning with Nearest Neighbors

  • Authors:
  • Liangcai Shu, Jinhui Wu, Lei Yu, Weiyi Meng

  • Affiliations:
  • Dept. of Computer Science, SUNY at Binghamton, Binghamton, New York, U.S.A. 13902 (all authors)

  • Venue:
  • APWeb/WAIM '09: Proceedings of the Joint International Conferences on Advances in Data and Web Management
  • Year:
  • 2009

Quantified Score

Hi-index 0.01

Abstract

In the k-nearest neighbor (KNN) classifier, nearest neighbors are drawn only from labeled data, which makes the method ill-suited to data sets with very few labeled points. In this paper, we aim to solve the classification problem by applying transduction to the KNN algorithm. For each data point we consider two groups of nearest neighbors: one from the labeled data and one from the unlabeled data. A kernel function is used to assign weights to neighbors. We derive a recurrence relation among neighboring data points and then present two solutions to the classification problem. One solution uses matrix computation and suits small or medium-size data sets; the other is an iterative algorithm for large data sets, whose iterative process minimizes an energy function. Experiments show that our solutions achieve high performance and that our iterative algorithm converges quickly.
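The idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual method: it assumes a Gaussian (RBF) kernel for the neighbor weights and a simple fixed-point iteration that repeatedly replaces each unlabeled point's class scores with the kernel-weighted average of its k nearest labeled neighbors and k nearest unlabeled neighbors. All function and parameter names (`transductive_knn`, `sigma`, `n_iter`) are hypothetical.

```python
import numpy as np

def transductive_knn(X_l, y_l, X_u, k=3, sigma=1.0, n_iter=50):
    """Sketch of kernel-weighted transductive KNN (illustrative only).

    Each unlabeled point's class-score vector is iteratively updated as a
    kernel-weighted average over two neighbor groups: its k nearest
    labeled points and its k nearest unlabeled points.
    """
    # Gaussian kernel on squared Euclidean distance (an assumed choice).
    rbf = lambda a, b: np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))
    classes = np.unique(y_l)
    n_u = len(X_u)
    # One-hot class scores for labeled points; uniform init for unlabeled.
    F_l = (y_l[:, None] == classes[None, :]).astype(float)
    F_u = np.full((n_u, len(classes)), 1.0 / len(classes))
    for _ in range(n_iter):
        F_new = np.empty_like(F_u)
        for i in range(n_u):
            # k nearest labeled neighbors of point i.
            d_l = np.array([np.sum((X_u[i] - x) ** 2) for x in X_l])
            nl = np.argsort(d_l)[:k]
            # k nearest unlabeled neighbors, excluding the point itself.
            d_u = np.array([np.sum((X_u[i] - x) ** 2) for x in X_u])
            d_u[i] = np.inf
            nu = np.argsort(d_u)[:k]
            # Kernel weights for both neighbor groups.
            w_l = np.array([rbf(X_u[i], X_l[j]) for j in nl])
            w_u = np.array([rbf(X_u[i], X_u[j]) for j in nu])
            # Weighted average of neighbor scores (weights are positive,
            # so the denominator is never zero).
            num = w_l @ F_l[nl] + w_u @ F_u[nu]
            F_new[i] = num / (w_l.sum() + w_u.sum())
        F_u = F_new
    return classes[np.argmax(F_u, axis=1)]
```

With only one labeled point per class, plain KNN would be fragile, but the unlabeled-neighbor term lets nearby unlabeled points reinforce each other, which is the transductive effect the paper exploits.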