Support vector machine (SVM) classifiers attempt to find a maximum-margin hyperplane by solving a convex optimization problem: the conventional SVM minimizes a quadratic function subject to linear inequality constraints. However, the margin is not scale invariant, so a linear transformation of the data tends to affect classification accuracy. Recently, the potential SVM addressed this scale variance by applying an appropriate scaling to improve classification accuracy. In this paper, we propose a novel SVM formulation that is in the spirit of the potential SVM but requires only a single matrix inversion to find the classifier. Experimental results bear out the efficacy of the proposed classifier.
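The abstract does not spell out the closed-form system behind the "single matrix inversion" step. As a rough illustration only, the sketch below fits a linear classifier in the spirit of least-squares / proximal SVMs, where the solution follows from one regularized linear solve; the function names, the regularization parameter `lam`, and the toy data are hypothetical and not taken from the paper.

```python
import numpy as np

def fit_ls_linear_classifier(X, y, lam=1e-2):
    """Fit a linear classifier via a single regularized least-squares solve.

    A generic least-squares-style sketch (in the spirit of LS-SVM /
    proximal SVM), NOT the exact formulation proposed in the paper,
    whose details are not given in the abstract.

    X : (n, d) data matrix; y : (n,) labels in {-1, +1}.
    Returns (w, b) defining the decision function sign(X @ w + b).
    """
    n, d = X.shape
    A = np.hstack([X, np.ones((n, 1))])  # augment with a bias column
    # One symmetric linear solve -- the "single matrix inversion":
    # (A^T A + lam * I) [w; b] = A^T y
    H = A.T @ A + lam * np.eye(d + 1)
    wb = np.linalg.solve(H, A.T @ y)
    return wb[:-1], wb[-1]

def predict(X, w, b):
    return np.sign(X @ w + b)

# Toy usage: two Gaussian blobs (hypothetical data, for illustration)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.concatenate([-np.ones(50), np.ones(50)])
w, b = fit_ls_linear_classifier(X, y)
print("training accuracy:", np.mean(predict(X, w, b) == y))
```

Note that `np.linalg.solve` is used instead of forming an explicit inverse, which is the numerically stable way to realize a one-shot matrix-inversion step of this kind.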