We present a reduction framework from ordinal ranking to binary classification. The framework consists of three steps: extracting extended examples from the original examples, learning a binary classifier on the extended examples with any binary classification algorithm, and constructing a ranker from the binary classifier. Based on the framework, we show that a weighted 0/1 loss of the binary classifier upper-bounds the mislabeling cost of the ranker, both error-wise and regret-wise. Our framework allows not only the design of good ordinal ranking algorithms based on well-tuned binary classification approaches, but also the derivation of new generalization bounds for ordinal ranking from known bounds for binary classification. In addition, our framework unifies many existing ordinal ranking algorithms, such as perceptron ranking and support vector ordinal regression. When compared empirically on benchmark data sets, some of our newly designed algorithms enjoy advantages in terms of both training speed and generalization performance over existing algorithms. In addition, the newly designed algorithms lead to better cost-sensitive ordinal ranking performance, as well as improved listwise ranking performance.
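The three steps of the framework (extract extended examples, train any binary classifier, construct a ranker) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses scikit-learn's `LogisticRegression` as the stand-in binary learner, encodes each threshold index as a one-hot block appended to the features, and all function names (`extend`, `fit_ranker`, `rank`) are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def extend(X, y, K):
    """Step 1: one extended binary example per (example, threshold k).

    The extended example ((x, k), +1 if y > k else 0) asks the binary
    question "is the rank of x greater than k?" for k = 1, ..., K-1.
    """
    Xe, ye = [], []
    for xi, yi in zip(X, y):
        for k in range(1, K):
            onehot = np.zeros(K - 1)
            onehot[k - 1] = 1.0  # encode which threshold this example tests
            Xe.append(np.concatenate([xi, onehot]))
            ye.append(1 if yi > k else 0)
    return np.array(Xe), np.array(ye)

def fit_ranker(X, y, K):
    """Step 2: train any binary classifier on the extended examples."""
    Xe, ye = extend(X, y, K)
    return LogisticRegression(max_iter=1000).fit(Xe, ye)

def rank(clf, X, K):
    """Step 3: ranker = 1 + number of thresholds the classifier says
    each example exceeds."""
    preds = np.zeros(len(X), dtype=int)
    for k in range(1, K):
        onehot = np.zeros((len(X), K - 1))
        onehot[:, k - 1] = 1.0
        preds += clf.predict(np.hstack([X, onehot])).astype(int)
    return preds + 1
```

Because the ranker's output is one plus a count of positive binary answers, each binary mistake can shift the predicted rank by at most one step, which is the intuition behind the cost bound stated above (the full framework additionally weights the extended examples by per-threshold costs, omitted here for brevity).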