Label ranking by learning pairwise preferences
Artificial Intelligence
We study the problem of label ranking, a machine learning task that consists of inducing a mapping from instances to rankings over a finite number of labels. Our learning method, referred to as ranking by pairwise comparison (RPC), first induces pairwise order relations (preferences) from suitable training data, using a natural extension of so-called pairwise classification. A ranking is then derived from a set of such relations by means of a ranking procedure. In this paper, we first elaborate on a key advantage of such a decomposition, namely the fact that it allows the learner to adapt to different loss functions without re-training, by using different ranking procedures on the same predicted order relations. In this regard, we distinguish between two types of errors, called, respectively, ranking error and position error. Focusing on the position error, which has received less attention so far, we then propose a ranking procedure called ranking through iterated choice as well as an efficient pairwise implementation thereof. Apart from a theoretical justification of this procedure, we offer empirical evidence in favor of its superior performance as a risk minimizer for the position error.
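The two-stage decomposition described above can be sketched in a few lines. The snippet below is a minimal, illustrative sketch of the second stage only: it assumes a matrix `R` of already-predicted soft pairwise preferences, where `R[i][j]` in [0, 1] estimates the probability that label `i` precedes label `j` (so `R[j][i] = 1 - R[i][j]`). The function names, the toy matrix, and the simplified iterated-choice loop (which merely recomputes scores on the shrinking label set rather than re-estimating conditional preferences) are this sketch's assumptions, not the paper's exact procedure.

```python
def rank_by_weighted_voting(R):
    """Borda-style ranking procedure: score each label by the sum of its
    predicted pairwise preferences and sort in descending order."""
    n = len(R)
    scores = [sum(R[i][j] for j in range(n) if j != i) for i in range(n)]
    return sorted(range(n), key=lambda i: -scores[i])

def rank_by_iterated_choice(R):
    """Simplified stand-in for ranking through iterated choice: repeatedly
    pick the top-scoring label among the remaining ones, remove it, and
    recompute scores on the reduced label set."""
    remaining = list(range(len(R)))
    ranking = []
    while remaining:
        scores = {i: sum(R[i][j] for j in remaining if j != i)
                  for i in remaining}
        best = max(remaining, key=lambda i: scores[i])
        ranking.append(best)
        remaining.remove(best)
    return ranking

# Toy preference matrix for three labels (diagonal unused).
R = [[0.0, 0.9, 0.8],
     [0.1, 0.0, 0.6],
     [0.2, 0.4, 0.0]]
print(rank_by_weighted_voting(R))   # -> [0, 1, 2]
print(rank_by_iterated_choice(R))
```

Because both procedures consume the same predicted preference matrix, swapping one for the other requires no re-training of the pairwise models, which is exactly the adaptivity to different loss functions that the decomposition is meant to provide.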