A finite concave minimization algorithm is proposed for constructing kernel classifiers that use a minimal number of data points both in generating and in characterizing a classifier. The algorithm is justified theoretically, on the basis of linear programming perturbation theory and a leave-one-out error bound, as well as by effective computational results on seven real-world datasets. A nonlinear rectangular kernel is generated by systematically utilizing as few of the data as possible, both in training and in characterizing a nonlinear separating surface. This can yield a substantial reduction in kernel data dependence (over 94% on six of the seven public datasets tested) while maintaining test set correctness equal to that obtained with a conventional support vector machine classifier, which depends on many more data points. This reduction in data dependence results in a much faster classifier that requires less storage. To eliminate data points, the proposed approach makes use of a novel loss function, the "pound" function (·)_#, a linear combination of the 1-norm and the step function that measures both the magnitude and the presence of any error.
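One way to write the pound loss described in the last sentence, as a hedged sketch: for a nonnegative error (slack) vector e and a trade-off weight λ ∈ [0, 1) (the symbol λ and this exact convex combination are illustrative assumptions, not necessarily the paper's notation),

\[
(e)_{\#} \;=\; (1-\lambda)\,\lVert e \rVert_{1} \;+\; \lambda \sum_{i} (e_i)_{*},
\qquad
(t)_{*} \;=\; \begin{cases} 1, & t > 0,\\ 0, & t \le 0. \end{cases}
\]

The 1-norm term measures the magnitude of each error, while the step-function term counts how many errors are present at all; suppressing the step term is what eliminates dependence on individual data points.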
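The rectangular kernel the abstract refers to can be pictured as a kernel matrix between all m training points and only the k << m points the algorithm retains. Below is a minimal sketch, assuming a Gaussian kernel and NumPy; the function name gaussian_rectangular_kernel and the width parameter mu are illustrative choices, not the paper's notation.

    import numpy as np

    def gaussian_rectangular_kernel(A, B, mu=1.0):
        # Rectangular Gaussian kernel between all m rows of A and a reduced
        # subset B of k rows (k << m), yielding an m x k matrix K(A, B').
        # 'mu' is an assumed kernel-width parameter.
        sq_dists = (
            np.sum(A**2, axis=1)[:, None]
            + np.sum(B**2, axis=1)[None, :]
            - 2.0 * A @ B.T
        )
        return np.exp(-mu * sq_dists)

    # Example: 100 training points; the classifier is characterized by only 5.
    A = np.random.rand(100, 3)
    B = A[:5]                              # reduced subset retained after training
    K = gaussian_rectangular_kernel(A, B)  # shape (100, 5)

Because the classifier is evaluated through the kernel against the reduced subset alone, shrinking that subset translates directly into the faster, lower-storage classifier described above.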