It is well known that the separating hyperplane produced by a (standard) support vector machine (SVM) lies in the middle of the margin, equidistant from the support vectors of the two separated clusters in the high-dimensional feature space. One might therefore expect the corresponding separating hypersurface to lie in the middle of the margin, equidistant from the two clusters, in the input sample space as well; in reality, it does not. We show theoretically that this ''middle-located hypersurface'' expectation in the input sample space is not, in general, guaranteed by SVMs. Several illustrative examples and additional experiments on large data sets are investigated accordingly.
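The claim is easy to probe numerically. The following is a minimal sketch, assuming scikit-learn's SVC; the toy data set, the RBF kernel parameters (C, gamma), and the grid tolerance used to approximate the decision boundary are illustrative assumptions, not taken from the paper. It trains a kernel SVM on 2-D data and compares the input-space distances from each class's support vectors to a sampled approximation of the separating hypersurface:

```python
import numpy as np
from sklearn.svm import SVC

# Toy 2-D data: two classes with deliberately different spreads, so the
# input-space geometry around the boundary is asymmetric (illustrative only).
rng = np.random.default_rng(0)
X_pos = rng.normal(loc=[2.0, 0.0], scale=[0.3, 1.5], size=(40, 2))
X_neg = rng.normal(loc=[-2.0, 0.0], scale=[1.5, 0.3], size=(40, 2))
X = np.vstack([X_pos, X_neg])
y = np.hstack([np.ones(40), -np.ones(40)])

# RBF-kernel SVM: the feature-space hyperplane is mid-margin by construction.
clf = SVC(kernel="rbf", C=10.0, gamma=0.5).fit(X, y)

# Sample the zero level set of the decision function on a fine grid to
# approximate the separating hypersurface in the input space.
xx, yy = np.meshgrid(np.linspace(-5, 5, 400), np.linspace(-5, 5, 400))
grid = np.c_[xx.ravel(), yy.ravel()]
vals = clf.decision_function(grid)
boundary = grid[np.abs(vals) < 0.05]  # points approximately on the boundary

# Input-space distance from each class's support vectors to the boundary.
sv = clf.support_vectors_
sv_labels = y[clf.support_]
for label in (+1, -1):
    pts = sv[sv_labels == label]
    d = np.min(np.linalg.norm(pts[:, None, :] - boundary[None, :, :], axis=2), axis=1)
    print(f"class {label:+d}: nearest input-space distance to boundary = {d.min():.3f}")
```

On asymmetric data like this, the two printed distances typically differ, which is the abstract's point: the input-space pre-image of the feature-space mid-margin hyperplane need not be equidistant from the two clusters.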