Stability and bias-variance analysis are two powerful tools for understanding learning algorithms. We use these tools to analyze the learning the kernel matrix (LKM) algorithm. The motivation is twofold: (i) LKM works in the transductive setting, where both training and test data points are given a priori, so it is worth knowing how stable LKM is under small variations in the data set; and (ii) it has been argued that LKMs overfit the given data set. In particular, we are interested in answering the following questions: (a) Is LKM a stable algorithm? (b) Does it overfit? (c) What is its bias behavior with different optimal kernels? Our experimental results show that LKMs do not overfit the given data set. The stability analysis reveals that LKMs are unstable algorithms.
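The stability notion above can be probed empirically: retrain after deleting one training point and measure how far the predictions on held-out points move. The paper studies LKM; the sketch below is a minimal stand-in, assuming kernel ridge regression with a fixed RBF kernel (the helpers `rbf_kernel` and `fit_predict` are illustrative, not from the paper) purely to show the shape of such a leave-one-out stability experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, gamma=1.0):
    # Gram matrix of k(x, z) = exp(-gamma * ||x - z||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_predict(X_tr, y_tr, X_te, lam=0.1):
    # Kernel ridge regression: alpha = (K + lam * n * I)^{-1} y
    n = len(X_tr)
    K = rbf_kernel(X_tr, X_tr)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y_tr)
    return rbf_kernel(X_te, X_tr) @ alpha

# Synthetic binary-labelled data and a fixed test set.
X = rng.normal(size=(60, 2))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=60))
X_test = rng.normal(size=(40, 2))

full = fit_predict(X, y, X_test)

# Leave-one-out perturbations: the largest shift in any test
# prediction is an empirical proxy for (in)stability.
max_change = max(
    np.abs(fit_predict(np.delete(X, i, 0), np.delete(y, i), X_test) - full).max()
    for i in range(len(X))
)
print("max prediction change under one-point deletion:", max_change)
```

For a fixed, well-regularized kernel this shift stays small; repeating the same probe with a kernel re-learned on each perturbed set is the kind of comparison the stability analysis of LKM calls for.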