The problem of ranking, in which the goal is to learn a real-valued ranking function that induces a ranking or ordering over an instance space, has recently gained attention in machine learning. We define a model of learnability for ranking functions in a particular setting of the ranking problem known as the bipartite ranking problem, and derive a number of results in this model. Our first main result provides a sufficient condition for the learnability of a class of ranking functions ${\mathcal F}$: we show that ${\mathcal F}$ is learnable if its bipartite rank-shatter coefficients, which measure the richness of a ranking function class in the same way that the standard VC-dimension-related shatter coefficients (growth function) measure the richness of a class of classification functions, do not grow too quickly. Our second main result gives a necessary condition for learnability: we define a new combinatorial parameter for a class of ranking functions ${\mathcal F}$, which we term the rank dimension of ${\mathcal F}$, and show that ${\mathcal F}$ is learnable only if its rank dimension is finite. Finally, we investigate questions of the computational complexity of learning ranking functions.
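For context, the bipartite rank-shatter coefficients are commonly defined along the following lines (a sketch of the standard definition from the bipartite-ranking literature; the paper's exact formulation may differ):
$$r({\mathcal F}, m, n) \;=\; \max_{\mathbf{x} \in X^m,\; \mathbf{x}' \in X^n} \bigl|\{\, B_f(\mathbf{x}, \mathbf{x}') : f \in {\mathcal F} \,\}\bigr|,$$
where $B_f(\mathbf{x}, \mathbf{x}')$ is the $m \times n$ matrix whose $(i,j)$ entry is $1$ if $f(x_i) > f(x'_j)$, $\tfrac{1}{2}$ if $f(x_i) = f(x'_j)$, and $0$ if $f(x_i) < f(x'_j)$. The sufficient condition in the abstract can then be read as a requirement that $r({\mathcal F}, m, n)$ not grow too quickly as a function of $m$ and $n$.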