On the Stability and Bias-Variance Analysis of Kernel Matrix Learning

  • Authors:
  • V. Vijaya Saradhi; Harish Karnick

  • Affiliations:
  • Dept. of Computer Science and Engineering, Indian Institute of Technology Kanpur, India; Dept. of Computer Science and Engineering, Indian Institute of Technology Kanpur, India

  • Venue:
  • CAI '07: Proceedings of the 20th Conference of the Canadian Society for Computational Studies of Intelligence on Advances in Artificial Intelligence
  • Year:
  • 2007

Abstract

Stability and bias-variance analysis are two powerful tools for understanding learning algorithms. We use these tools to analyze the learning the kernel matrix (LKM) algorithm. The motivation is twofold: (i) LKM works in the transductive setting, where both training and test data points must be given a priori, so it is worth knowing how stable LKM is under small variations in the data set; and (ii) it has been argued that LKMs overfit the given data set. In particular, we are interested in answering the following questions: (a) Is LKM a stable algorithm? (b) Do LKMs overfit? (c) What is the bias behavior with different optimal kernels? Our experimental results show that LKMs do not overfit the given data set. The stability analysis reveals that LKMs are unstable algorithms.
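
As a rough illustration of the stability notion discussed in the abstract, the sketch below measures how much a kernel classifier's outputs change when a single training point is deleted. The synthetic data, the fixed RBF kernel, and the SVM classifier are assumptions standing in for the actual LKM optimization, not the paper's method.

```python
# Illustrative sketch (not the paper's method): a leave-one-out style
# stability probe for a kernel classifier. Substituting a learned kernel
# matrix for the fixed RBF kernel would mimic the LKM setting; the data,
# kernel, and classifier below are assumed purely for illustration.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=120, n_features=10, random_state=0)

def decision_values(X_train, y_train, X_eval):
    """Train an RBF-kernel SVM and return real-valued outputs on X_eval."""
    clf = SVC(kernel="rbf", gamma=0.1, C=1.0)
    clf.fit(X_train, y_train)
    return clf.decision_function(X_eval)

# Reference outputs obtained from the full data set.
f_full = decision_values(X, y, X)

# Perturb the data set by deleting one point at a time and record the
# largest change in the outputs on the remaining points (a uniform-stability
# style score: small values suggest a stable algorithm).
max_change = 0.0
for i in range(len(y)):
    keep = np.delete(np.arange(len(y)), i)
    f_loo = decision_values(X[keep], y[keep], X[keep])
    max_change = max(max_change, np.max(np.abs(f_loo - f_full[keep])))

print(f"largest output change under single-point deletion: {max_change:.4f}")
```

A bias-variance estimate would proceed similarly by resampling training sets and decomposing the average prediction error into bias and variance terms; it is omitted here for brevity.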