Accelerating kernel perceptron learning
ICANN'07: Proceedings of the 17th International Conference on Artificial Neural Networks
Statistical learning theory makes large margins an important property of linear classifiers, and Support Vector Machines were designed with this goal in mind. However, it has been shown that large margins can also be obtained with much simpler kernel perceptrons when they are trained with ad hoc updating rules that differ in principle from Rosenblatt's rule. In this work we demonstrate numerically that, rewritten in a convex update setting and combined with an appropriate procedure for selecting the updating vector, Rosenblatt's rule does indeed yield maximum margins for kernel perceptrons, although it converges more slowly than more sophisticated methods such as the Schlesinger–Kozinec (SK) algorithm.
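To make the setting concrete, the sketch below shows one plausible reading of a convex-update kernel perceptron: the dual coefficients are kept a convex combination, and each round moves mass toward the training point with the smallest current margin. The RBF kernel, the fixed step size `lam`, and the worst-margin selection rule are illustrative assumptions; the paper's exact selection procedure and the SK algorithm's line search are not reproduced here.

```python
# Minimal sketch of a kernel perceptron with a convex (Rosenblatt-style)
# update. The coefficients alpha stay nonnegative and sum to one; each
# iteration shrinks them by (1 - lam) and adds lam to the worst-margin
# example. Kernel choice, step size, and selection rule are assumptions
# made for illustration, not the paper's exact procedure.
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def convex_kernel_perceptron(X, y, gamma=1.0, lam=0.05, max_iter=2000):
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    alpha = np.full(n, 1.0 / n)      # convex combination: alpha >= 0, sums to 1
    for _ in range(max_iter):
        f = K @ (alpha * y)          # decision values on the training set
        i = int(np.argmin(y * f))    # example with the smallest margin
        alpha *= (1.0 - lam)         # convex update: shrink all coefficients...
        alpha[i] += lam              # ...and move mass toward the worst point
    return alpha

# Usage: two Gaussian blobs with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.5, (20, 2)), rng.normal(1.0, 0.5, (20, 2))])
y = np.hstack([-np.ones(20), np.ones(20)])
alpha = convex_kernel_perceptron(X, y, gamma=0.5)
pred = np.sign(rbf_kernel(X, X, 0.5) @ (alpha * y))
print("training accuracy:", (pred == y).mean())
```

The convex form keeps the classifier a normalized combination of training points, which is what lets the margin of the current solution be tracked and compared against SK-style methods; with a fixed step size the sketch only approaches a large-margin solution, whereas a line search over `lam`, as in SK, would converge faster.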