Given a set V of n elements, we wish to linearly order them using pairwise preference labels, which may be non-transitive (due to irrationality or arbitrary noise). The goal is to produce a linear order that disagrees with as few of the pairwise preference labels as possible. Performance is measured by two parameters: the number of disagreements (loss) and the query complexity (the number of pairwise preference labels requested). Our algorithm adaptively queries at most O(ε⁻⁶ n log⁵ n) preference labels to achieve a regret of ε times the optimal loss. As a function of n, this is asymptotically better than the standard (non-adaptive) learning bounds achievable for the same problem. Our main result takes a step closer toward settling an open problem posed by theoreticians and practitioners of learning to rank from pairwise information: what is a provably correct way to sample preference labels? To further demonstrate the power and practicality of our solution, we analyze a typical test case in which a large-margin linear relaxation is used to efficiently solve the simpler learning problems in our decomposition.
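To make the setting concrete, the following is a minimal, hypothetical sketch (not the paper's algorithm) of adaptive ranking from a pairwise preference oracle: a randomized quicksort queries only the pairs it needs, roughly O(n log n) labels in expectation rather than all n(n-1)/2, and the loss of the resulting order is the number of queried-style preference labels it violates. The names `active_rank`, `prefer`, and `loss` are illustrative assumptions.

```python
import random


def active_rank(items, prefer, rng=random.Random(0)):
    # Rank items by randomized quicksort, asking the preference oracle
    # prefer(a, b) -> True iff a should precede b.  Each recursion level
    # queries each element against one pivot, so only O(n log n) of the
    # n*(n-1)/2 possible labels are requested in expectation.
    if len(items) <= 1:
        return list(items)
    pivot = rng.choice(items)
    before = [x for x in items if x != pivot and prefer(x, pivot)]
    after = [x for x in items if x != pivot and not prefer(x, pivot)]
    return active_rank(before, prefer, rng) + [pivot] + active_rank(after, prefer, rng)


def loss(order, prefer):
    # Number of ordered pairs (a, b) placed a-before-b in `order`
    # whose preference label says b should precede a.
    pos = {x: i for i, x in enumerate(order)}
    return sum(
        1
        for a in pos
        for b in pos
        if pos[a] < pos[b] and not prefer(a, b)
    )
```

With a transitive (noise-free) oracle such as `lambda a, b: a < b`, the sketch recovers the true order with zero loss; with a non-transitive oracle it still returns some linear order, and `loss` counts its disagreements with the labels.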