We propose a thresholded ensemble model for ordinal regression problems. The model consists of a weighted ensemble of confidence functions and an ordered vector of thresholds. We derive novel large-margin bounds on common error functions, such as the classification error and the absolute error. Beyond studying some existing algorithms, we propose two novel boosting approaches for constructing thresholded ensembles. Both approaches are not only simpler than existing algorithms but also more strongly connected to the large-margin bounds. Moreover, they perform comparably to SVM-based algorithms while enjoying the benefit of faster training. Experimental results on benchmark datasets demonstrate the usefulness of our boosting approaches.
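To illustrate the model structure described above, here is a minimal sketch of how a thresholded ensemble could produce an ordinal prediction: the ensemble score is a weighted sum of confidence functions, and the predicted rank is determined by how many ordered thresholds the score exceeds. All function names, weights, and threshold values below are illustrative assumptions, not the paper's notation or learned parameters.

```python
import numpy as np

def thresholded_ensemble_predict(x, confidence_fns, weights, thresholds):
    """Predict an ordinal rank in {1, ..., K} with a thresholded ensemble.

    Illustrative sketch: score(x) = sum_t w_t * h_t(x); the rank is
    one plus the number of (ascending) thresholds the score exceeds.
    """
    score = sum(w * h(x) for w, h in zip(weights, confidence_fns))
    return 1 + int(np.sum(score > np.asarray(thresholds)))

# Toy example with made-up confidence functions and thresholds.
fns = [lambda x: x, lambda x: x - 1.0]
weights = [0.6, 0.4]
thresholds = [0.0, 1.0, 2.0]  # three thresholds => K = 4 ranks
print(thresholded_ensemble_predict(1.5, fns, weights, thresholds))  # score 1.1 -> rank 3
```

Note that monotonicity of the rank in the score is guaranteed only if the threshold vector is kept sorted, which is why the model maintains an ordered vector of thresholds.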