Generalization bounds that depend on the margin of a classifier are a relatively recent development. They provide an explanation for the performance of state-of-the-art learning systems such as support vector machines (SVMs) and AdaBoost. The difficulty with these bounds has been either their lack of robustness or their looseness. Whether the generalization of a classifier can be bounded more tightly in terms of a robust measure of the distribution of margin values has remained an open question for some time. This paper answers that question in the affirmative; moreover, the analysis leads to bounds that motivate the previously heuristic soft margin SVM algorithms and justify the use of the quadratic loss in neural network training. The results are extended to bounds on the probability of failing to achieve a target accuracy in regression prediction, with a statistical analysis of ridge regression and Gaussian processes as special cases. The analysis presented in the paper has also led to new boosting algorithms, described elsewhere.
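For context, a minimal sketch of the soft margin formulation that such bounds are usually taken to motivate, assuming the "soft margin SVM algorithms" referred to are the standard slack-variable programs; the notation (w, b, \xi_i, C) is illustrative and not drawn from the paper:

$$
\min_{w,\,b,\,\xi}\;\; \frac{1}{2}\|w\|^{2} \;+\; C\sum_{i=1}^{m}\xi_i
\qquad \text{s.t.}\quad y_i\bigl(\langle w, x_i\rangle + b\bigr) \;\ge\; 1 - \xi_i,\;\; \xi_i \ge 0,\;\; i=1,\dots,m.
$$

Here each slack \xi_i measures how far example i falls short of the target margin, so \sum_i \xi_i is a robust summary of the margin distribution of the kind the abstract describes; replacing the penalty with \sum_i \xi_i^{2} gives the 2-norm variant, which is the natural point of contact with quadratic-loss training.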