Finding linear classifiers that maximize AUC scores is an important problem in ranking research. It is naturally formulated as a 1-norm hard/soft margin optimization problem over the pn pairs formed from p positive and n negative instances. However, solving these optimization problems directly is impractical, since the problem size (pn) grows quadratically in the given sample size (p + n). In this paper, we give (approximate) reductions from these problems to hard/soft margin optimization problems of linear size. First, for the hard margin case, we show that the problem reduces to a hard margin optimization problem over the p + n instances in which the bias (constant) term is also optimized. Then, for the soft margin case, we show that the problem approximately reduces to a soft margin optimization problem over the p + n instances, such that the resulting linear classifier is guaranteed to achieve a certain margin over pairs.
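To make the hard-margin reduction concrete, the following is a minimal sketch of the two optimization problems involved, written in our own notation; the normalization \|\mathbf{w}\|_1 = 1 and the exact constants are assumptions for illustration, not the paper's statement. The pairwise problem has one constraint per positive-negative pair, while the reduced problem has one constraint per instance plus a free bias term b:

% Pairwise 1-norm hard margin problem over the pn pairs (x_i^+, x_j^-):
\begin{align*}
\max_{\rho,\,\mathbf{w}} \quad & \rho \\
\text{s.t.} \quad & \mathbf{w}^\top (\mathbf{x}_i^+ - \mathbf{x}_j^-) \ge \rho
    \quad (1 \le i \le p,\ 1 \le j \le n), \\
& \|\mathbf{w}\|_1 = 1 .
\end{align*}

% Reduced problem over the p + n instances, with the bias term b optimized:
\begin{align*}
\max_{\rho,\,\mathbf{w},\,b} \quad & \rho \\
\text{s.t.} \quad & \mathbf{w}^\top \mathbf{x}_i^+ + b \ge \rho
    \quad (1 \le i \le p), \\
& -\big(\mathbf{w}^\top \mathbf{x}_j^- + b\big) \ge \rho
    \quad (1 \le j \le n), \\
& \|\mathbf{w}\|_1 = 1 .
\end{align*}

Under this formulation, adding a positive-instance constraint to a negative-instance constraint gives \mathbf{w}^\top(\mathbf{x}_i^+ - \mathbf{x}_j^-) \ge 2\rho, so a feasible solution of the reduced problem attains pairwise margin 2\rho; conversely, setting b = -\tfrac{1}{2}\big(\min_i \mathbf{w}^\top \mathbf{x}_i^+ + \max_j \mathbf{w}^\top \mathbf{x}_j^-\big) turns any solution with pairwise margin \rho into a feasible solution of the reduced problem with margin \rho/2. The two problems are thus equivalent up to this constant factor, while the number of constraints drops from pn to p + n.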