The Nature of Statistical Learning Theory.
Properties of support vector machines. Neural Computation.
Fast training of support vector machines using sequential minimal optimization. Advances in Kernel Methods.
Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond.
SVM-KM: speeding SVMs learning with a priori cluster selection and k-means. SBRN '00: Proceedings of the VI Brazilian Symposium on Neural Networks.
Fast pattern selection for support vector classifiers. PAKDD '03: Proceedings of the 7th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining.
Sample selection via clustering to construct support vector-like classifiers. IEEE Transactions on Neural Networks.
Reducing examples to accelerate support vector regression. Pattern Recognition Letters.
Neighborhood rough set based heterogeneous feature subset selection. Information Sciences: An International Journal.
Separating hypersurfaces of SVMs in input spaces. Pattern Recognition Letters.
Selecting samples and features for SVM based on neighborhood model. RSFDGrC '07: Proceedings of the 11th International Conference on Rough Sets, Fuzzy Sets, Data Mining and Granular Computing.
Similarity and kernel matrix evaluation based on spatial autocorrelation analysis. ISMIS '09: Proceedings of the 18th International Symposium on Foundations of Intelligent Systems.
A competitive learning approach to instance selection for support vector machines. KSEM '09: Proceedings of the 3rd International Conference on Knowledge Science, Engineering and Management.
Selecting discrete and continuous features based on neighborhood decision error minimization. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics.
Constructing sparse KFDA using pre-image reconstruction. ICONIP '10: Proceedings of the 17th International Conference on Neural Information Processing: Models and Applications, Part II.
Fast support vector regression based on cut. ICSI '11: Proceedings of the Second International Conference on Advances in Swarm Intelligence, Part II.
Satrap: data and network heterogeneity aware P2P data-mining. PAKDD '10: Proceedings of the 14th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining, Part II.
NMGRS: neighborhood-based multigranulation rough sets. International Journal of Approximate Reasoning.
When the training pattern set is large, training a support vector machine (SVM) requires a large amount of memory and a long time. Recently, we proposed a neighborhood-property-based pattern selection algorithm (NPPS), which, ahead of SVM training, selects only the patterns that are likely to lie near the decision boundary [Proc. of the 7th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD), Lecture Notes in Artificial Intelligence (LNAI 2637), Seoul, Korea, pp. 376-387]. NPPS aims to identify the patterns that are likely to become support vectors in feature space. Preliminary reports show its effectiveness: SVM training time was reduced by two orders of magnitude with almost no loss of accuracy on various datasets. It should be noted, however, that the decision boundary of an SVM and its support vectors are defined in feature space, whereas NPPS as described above operates in input space. If the neighborhood relation in input space is not preserved in feature space, NPPS may not always be effective. In this paper, we show that the neighborhood relation is invariant under the input-to-feature space mapping. This result assures that the patterns selected by NPPS in input space are likely to be located near the decision boundary in feature space.
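To illustrate the idea (not the authors' exact algorithm), the following is a minimal sketch of neighborhood-based pattern selection: a pattern whose k nearest neighbors in input space contain more than one class label is presumed to lie near the decision boundary and is retained for SVM training. The function name, the brute-force distance computation, and the "mixed labels" criterion are simplifying assumptions for this sketch; the published NPPS uses more refined neighborhood properties and an expanding search to avoid computing all pairwise distances.

```python
import numpy as np

def select_boundary_patterns(X, y, k=5):
    """Hypothetical sketch of neighborhood-based pattern selection.

    Keeps the indices of patterns whose k nearest neighbors (in input
    space) carry more than one class label -- a mixed neighborhood
    suggests the pattern is close to the decision boundary.
    """
    selected = []
    for i in range(len(X)):
        # Brute-force Euclidean distances from pattern i to all patterns.
        dists = np.linalg.norm(X - X[i], axis=1)
        # Indices of the k nearest neighbors, excluding the pattern itself.
        neighbors = np.argsort(dists)[1:k + 1]
        # A neighborhood with mixed labels hints at boundary proximity.
        if len(set(y[neighbors].tolist())) > 1:
            selected.append(i)
    return selected
```

On a toy one-dimensional problem with class 0 at {0, 1, 2} and class 1 at {3, 4, 5}, only the two patterns adjacent to the class transition are retained, so the SVM would train on a fraction of the original set.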