We are interested in supervised ranking algorithms that perform especially well near the top of the ranked list, and are only required to perform sufficiently well on the rest of the list. In this work, we provide a general form of convex objective that gives high-scoring examples more importance. This "push" near the top of the list can be chosen to be arbitrarily large or small, according to the preference of the user. We choose ℓp-norms to provide a specific type of push: the larger the user sets p, the more the objective concentrates on the top of the list. We derive a generalization bound based on the p-norm objective, working around the natural asymmetry of the problem. We then derive a boosting-style algorithm for the problem of ranking with a push at the top, and illustrate its usefulness through experiments on repository data. We prove that the minimizer of the algorithm's objective is unique in a specific sense. Furthermore, we show how our objective relates to quality measures used in information retrieval.
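To make the idea concrete, here is a minimal sketch (not the paper's implementation) of an ℓp-norm "push" objective for bipartite ranking: each negative example accumulates an exponential pairwise loss against all positive examples, and that per-negative total is raised to the power p before summing. The function name `pnorm_push_loss` and the score values are hypothetical; the sketch assumes an exponential pairwise loss and real-valued scores.

```python
import math

def pnorm_push_loss(pos_scores, neg_scores, p):
    """Sketch of an l_p-norm push objective for bipartite ranking.

    For each negative example, sum an exponential pairwise loss
    exp(-(s_pos - s_neg)) over all positive examples, then raise the
    per-negative total to the power p. Larger p makes the objective
    concentrate on negatives that score near the top of the list.
    """
    total = 0.0
    for s_neg in neg_scores:
        per_neg = sum(math.exp(-(s_pos - s_neg)) for s_pos in pos_scores)
        total += per_neg ** p
    return total

# Hypothetical scores: one negative (0.9) is ranked near the top,
# interleaved with the positives; the others are safely at the bottom.
pos = [1.0, 0.8, 0.5]
neg = [0.9, 0.1, 0.0]

# As p grows, the objective is increasingly dominated by the
# highest-scoring (worst-offending) negative example.
for p in (1, 4, 16):
    print(p, pnorm_push_loss(pos, neg, p))
```

With p = 1 this reduces to the usual sum over all misranked-pair losses (as in RankBoost-style objectives); as p grows, the contribution of the top-scoring negative dominates, which is the "push" described in the abstract.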