Tighter perceptron with improved dual use of cached data for model representation and validation

  • Authors:
  • Zhuang Wang; Slobodan Vucetic

  • Affiliations:
  • Center for Information Science and Technology, Department of Computer and Information Sciences, Temple University, Philadelphia, PA (both authors)

  • Venue:
  • IJCNN'09 Proceedings of the 2009 international joint conference on Neural Networks
  • Year:
  • 2009

Abstract

Kernel perceptrons are represented by a subset of training points, called the support vectors, and their associated weights. To address the issue of unlimited growth in model size during training, budget kernel perceptrons maintain a fixed number of support vectors and thus achieve constant update time and space complexity. In this paper, a new kernel perceptron algorithm for online learning on a budget is proposed. Following the idea of the Tighter Perceptron, upon exceeding the budget the algorithm removes the support vector with the minimal impact on classification accuracy. To optimize memory use, instead of maintaining a separate validation data set for accuracy estimation, the proposed algorithm uses only the support vectors, for both model representation and validation. This is achieved by estimating the posterior class probability of each support vector and using this information in validation. Experimental results on 11 benchmark data sets indicate that the proposed algorithm is significantly more accurate than competing budget kernel perceptrons and that its accuracy is comparable to that of resource-unbounded perceptrons, including the original kernel perceptron and the Tighter Perceptron that uses the whole training data set for validation.
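The budget-maintenance idea in the abstract can be illustrated with a minimal sketch: a mistake-driven kernel perceptron that, when its support set exceeds the budget, removes the support vector whose removal least degrades accuracy measured on the remaining support vectors. This is an assumption-laden simplification, not the paper's actual algorithm; in particular, the posterior class-probability weighting described in the abstract is replaced here by a plain 0/1 error count, and the RBF kernel and all parameter names are illustrative choices.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    # Gaussian (RBF) kernel; the kernel choice is illustrative
    return np.exp(-gamma * np.linalg.norm(x - y) ** 2)

class BudgetKernelPerceptron:
    """Sketch of a budget kernel perceptron (not the paper's exact method).

    On a mistake, the misclassified point is added as a support vector.
    When the budget is exceeded, the support vector whose removal least
    hurts accuracy on the remaining support vectors is discarded -- a
    simplified stand-in for the paper's validation scheme, which weights
    errors by estimated posterior class probabilities.
    """

    def __init__(self, budget=10, kernel=rbf):
        self.budget = budget
        self.kernel = kernel
        self.sv_x = []   # support vectors
        self.sv_y = []   # their labels (+1 / -1)
        self.alpha = []  # their weights

    def decision(self, x, exclude=None):
        # kernel expansion f(x) = sum_i alpha_i * y_i * K(x_i, x),
        # optionally leaving out support vector `exclude`
        return sum(a * y * self.kernel(sx, x)
                   for i, (a, y, sx) in enumerate(
                       zip(self.alpha, self.sv_y, self.sv_x))
                   if i != exclude)

    def predict(self, x):
        return 1 if self.decision(x) >= 0 else -1

    def _removal_cost(self, i):
        # count support vectors misclassified if SV i were removed
        # (the paper instead uses probability-weighted validation)
        errs = 0
        for xj, yj in zip(self.sv_x, self.sv_y):
            if np.sign(self.decision(xj, exclude=i)) != yj:
                errs += 1
        return errs

    def partial_fit(self, x, y):
        if self.predict(x) != y:  # mistake-driven perceptron update
            self.sv_x.append(np.asarray(x, dtype=float))
            self.sv_y.append(y)
            self.alpha.append(1.0)
            if len(self.sv_x) > self.budget:
                worst = min(range(len(self.sv_x)), key=self._removal_cost)
                for lst in (self.sv_x, self.sv_y, self.alpha):
                    del lst[worst]
```

Because the support set never grows beyond the budget, each update costs at most O(B^2) kernel evaluations for the removal search, keeping time and space per step constant in the length of the data stream.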