Refining kernel matching pursuit

  • Authors:
  • Jianwu Li; Yao Lu

  • Affiliations:
  • Beijing Key Lab of Intelligent Information Technology, School of Computer, Beijing Institute of Technology, Beijing, China (both authors)

  • Venue:
  • ISNN'10: Proceedings of the 7th International Conference on Advances in Neural Networks, Part II
  • Year:
  • 2010

Abstract

Kernel matching pursuit (KMP), as a greedy machine learning algorithm, iteratively appends functions from a kernel-based dictionary to its solution. An obvious problem is that all kernel functions in the dictionary remain unchanged throughout the appending process, yet it is difficult to determine an optimal dictionary of kernel functions before training without sufficient prior knowledge. This paper proposes to further refine the solutions obtained by KMP by adjusting all of their parameters simultaneously. Three optimization methods, gradient descent (GD), simulated annealing (SA), and particle swarm optimization (PSO), are used to perform the refining procedure. Their performance is also analyzed and evaluated, based on experimental results on UCI benchmark datasets.
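
To make the two stages concrete, below is a minimal Python sketch of the idea described in the abstract: a greedy KMP fit over a Gaussian (RBF) dictionary built from the training points, followed by a refinement stage that adjusts all kernel centers and expansion weights simultaneously by gradient descent (one of the three optimizers the paper compares; SA or PSO would replace the update rule). The kernel choice, the fixed width `gamma`, the learning rate, and all function names are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch of KMP + gradient-descent refinement (assumed RBF dictionary and squared loss).
import numpy as np

def rbf(X, centers, gamma):
    """Gaussian kernel values k(x, c) = exp(-gamma * ||x - c||^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kmp_fit(X, y, gamma=1.0, n_terms=10):
    """Greedy KMP: repeatedly append the dictionary function that best reduces
    the squared residual, with a least-squares coefficient."""
    K = rbf(X, X, gamma)                            # dictionary columns (one per training point)
    residual = y.copy()
    centers, weights = [], []
    for _ in range(n_terms):
        scores = (K.T @ residual) ** 2 / (K ** 2).sum(0)
        j = int(np.argmax(scores))                  # best-matching basis function
        w = (K[:, j] @ residual) / (K[:, j] @ K[:, j])
        centers.append(X[j].copy()); weights.append(w)
        residual -= w * K[:, j]
    return np.array(centers), np.array(weights)

def refine(X, y, centers, weights, gamma=1.0, lr=1e-3, n_iter=500):
    """Refinement stage: adjust all centers and weights simultaneously by
    plain gradient descent on the squared error."""
    for _ in range(n_iter):
        K = rbf(X, centers, gamma)                  # (n_samples, n_terms)
        err = K @ weights - y                       # prediction residual
        grad_w = K.T @ err                          # d loss / d weights
        # d loss / d centers via the chain rule through the RBF kernel
        diff = X[:, None, :] - centers[None, :, :]  # (n, m, d)
        grad_c = (err[:, None, None] * weights[None, :, None]
                  * K[:, :, None] * 2 * gamma * diff).sum(0)
        weights -= lr * grad_w
        centers -= lr * grad_c
    return centers, weights

# Toy usage on a 1-D regression problem.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(80)
C, w = kmp_fit(X, y, gamma=0.5, n_terms=8)
C, w = refine(X, y, C, w, gamma=0.5)
print("refined training MSE:", np.mean((rbf(X, C, 0.5) @ w - y) ** 2))
```

The sketch treats the KMP output purely as an initialization: the greedy stage fixes the number of terms, and the refinement stage then moves the kernel parameters themselves, which is the adjustment the paper argues a fixed dictionary cannot provide.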