We present a generalized view of support vector machines that relies on neither a Euclidean geometric interpretation nor positive semidefinite kernels. We base our development instead on the confidence matrix: the matrix normally determined by the direct (Hadamard) product of the kernel matrix with the label outer-product matrix. It turns out that alternative forms of confidence matrices are possible, and indeed useful. By focusing on the confidence matrix instead of the underlying kernel, we can derive an intuitive principle for optimizing example weights to yield robust classifiers. Our principle initially recovers the standard quadratic SVM training criterion, which is convex only for kernel-derived confidence measures. Given our generalized view, however, we can then derive a principled relaxation of the SVM criterion that yields a convex upper bound. This relaxation is always convex and can be solved with a linear program. Our new training procedure matches the generalization performance of standard SVMs on kernel-derived confidence functions, and achieves even better results with indefinite confidence functions.
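To make the central object concrete, the following sketch constructs the kernel-derived confidence matrix described above (the Hadamard product of the Gram matrix with the label outer product) and evaluates the standard SVM dual objective on it. The toy data, the linear kernel choice, and the function name `svm_dual_objective` are illustrative assumptions, not details from the paper; the point is only that the dual criterion depends on the kernel solely through this confidence matrix, and is concave exactly when that matrix is positive semidefinite.

```python
import numpy as np

# Toy data: 4 points with labels in {-1, +1} (illustrative, not from the paper).
X = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

# Linear kernel Gram matrix; positive semidefinite by construction.
K = X @ X.T

# Confidence matrix: Hadamard product of K with the label outer-product matrix.
# For a PSD kernel, C is also PSD; an indefinite "confidence function" would
# simply supply a different C here.
C = np.outer(y, y) * K

def svm_dual_objective(alpha, C):
    """Standard quadratic SVM dual criterion, written in terms of C alone:
    sum_i alpha_i - (1/2) alpha^T C alpha. Concave in alpha iff C is PSD."""
    return alpha.sum() - 0.5 * alpha @ C @ alpha

# Evaluate at a uniform weight vector (not the optimum, just a sample point).
alpha = np.full(4, 0.5)
obj = svm_dual_objective(alpha, C)
```

Note that the objective never touches `K` or `y` directly, only `C`; this is what licenses the paper's move of replacing the kernel-derived confidence matrix with more general, possibly indefinite, confidence functions.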