The nature of statistical learning theory
An introduction to Support Vector Machines and other kernel-based learning methods
An Algorithm for Finding Best Matches in Logarithmic Expected Time
ACM Transactions on Mathematical Software (TOMS)
Proximal support vector machine classifiers
Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining
A Tutorial on Support Vector Machines for Pattern Recognition
Data Mining and Knowledge Discovery
Similarity Search in High Dimensions via Hashing
VLDB '99 Proceedings of the 25th International Conference on Very Large Data Bases
Applications of Support Vector Machines for Pattern Recognition: A Survey
SVM '02 Proceedings of the First International Workshop on Pattern Recognition with Support Vector Machines
Training Support Vector Machines: an Application to Face Detection
CVPR '97 Proceedings of the 1997 Conference on Computer Vision and Pattern Recognition (CVPR '97)
SVM-KM: Speeding SVMs Learning with a priori Cluster Selection and k-Means
SBRN '00 Proceedings of the VI Brazilian Symposium on Neural Networks (SBRN'00)
Novelty detection: a review—part 1: statistical approaches
Signal Processing
Core Vector Machines: Fast SVM Training on Very Large Data Sets
The Journal of Machine Learning Research
Concept boundary detection for speeding up SVMs
ICML '06 Proceedings of the 23rd international conference on Machine learning
Neighborhood Property-Based Pattern Selection for Support Vector Machines
Neural Computation
Cutting-plane training of structural SVMs
Machine Learning
Efficient nearest neighbor query based on extended B+-tree in high-dimensional space
Pattern Recognition Letters
A fast iterative nearest point algorithm for support vector machine classifier design
IEEE Transactions on Neural Networks
In pre-processing data for SVMs, many researchers have tried to identify in advance the samples that will become support vectors. Support vectors generally lie in the overlap regions between different classes, but an overlap region does not always exist. This paper proposes a new method that finds the boundary region of each class instead of the overlap regions, so it can handle datasets without overlap. For each sample, the cosines of the sample-neighbor angles are summed; the sum ranges from 0 to k. When the sample lies in the boundary region of the data distribution, the sum is close to k; when the sample lies in the interior of the distribution, the sum is close to 0. Using this cosine sum, the samples in the interior of each class can be discarded before SVM training. Experimental results show that the proposed method solves cases that methods based on finding overlap regions cannot handle.
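One reading of the cosine-sum criterion above can be sketched as follows: summing the cosines of the angles between each neighbor direction and the mean neighbor direction equals the norm of the summed unit vectors, which lies in [0, k]. The neighborhood size k, the brute-force neighbor search, and the keep threshold below are illustrative assumptions, not the paper's stated choices.

```python
import numpy as np

def cosine_sum(X, k=5):
    """For each sample, sum the cosines of the angles between its k
    neighbor directions and their mean direction.  This equals the norm
    of the summed unit vectors, so it lies in [0, k]: near k for samples
    on the boundary of the distribution (neighbors all on one side),
    near 0 for interior samples (neighbors all around)."""
    n = len(X)
    # Brute-force pairwise distances; a k-d tree would scale better.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)  # exclude each sample from its own neighbors
    sums = np.empty(n)
    for i in range(n):
        nbrs = np.argsort(d[i])[:k]             # indices of k nearest neighbors
        u = X[nbrs] - X[i]                      # vectors toward the neighbors
        u /= np.linalg.norm(u, axis=1, keepdims=True)  # unit vectors
        sums[i] = np.linalg.norm(u.sum(axis=0))        # cosine sum in [0, k]
    return sums

def select_boundary(X, y, k=5, threshold=None):
    """Keep only samples whose cosine sum exceeds a threshold, computed
    per class so each class retains its own boundary region.  The default
    threshold k/2 is an assumed value for illustration."""
    keep = np.zeros(len(X), dtype=bool)
    t = threshold if threshold is not None else k / 2
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        keep[idx[cosine_sum(X[idx], k=k) >= t]] = True
    return keep
```

On a regular 2-D grid, for example, a corner point's neighbors all lie in one quadrant (cosine sum near k), while a center point's neighbors cancel out (cosine sum near 0), so only the grid's rim survives the selection.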