Regularized kernel methods for the ranking problem have attracted increasing attention recently; they are usually based on a regularization scheme in a reproducing kernel Hilbert space (RKHS). In this paper, we go beyond this framework by investigating the generalization ability of ranking with coefficient-based regularization. We propose a regularized ranking algorithm with a data-dependent hypothesis space and prove its representer theorem. A generalization error bound is established in terms of the covering numbers of the hypothesis space. In contrast to previous analyses that rely on Mercer kernels, our theoretical analysis applies to a much more general kernel function, which need not be symmetric or positive semi-definite. Empirical results on benchmark datasets demonstrate the effectiveness of the coefficient-based algorithm.
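To make the idea concrete, the following is a minimal sketch (not the paper's exact algorithm) of coefficient-based regularized ranking: the hypothesis is a kernel expansion f(x) = Σ_i α_i K(x, x_i) over the training points, the penalty is placed directly on the coefficient vector α rather than on an RKHS norm, and the kernel is deliberately asymmetric to illustrate that neither symmetry nor positive semi-definiteness is required. The kernel shift, the pairwise least-squares loss, and all parameter values are illustrative assumptions.

```python
import numpy as np

def general_kernel(u, v, gamma=1.0, shift=0.25):
    # Hypothetical asymmetric kernel: a shifted Gaussian, so in general
    # K(u, v) != K(v, u). It is not required to be positive semi-definite.
    return np.exp(-gamma * np.sum((u - (v + shift)) ** 2))

def fit_coefficient_ranker(X, y, lam=0.1, lr=0.01, epochs=200):
    """Sketch of coefficient-based regularized ranking.

    Hypothesis space: f(x) = sum_i alpha_i * K(x, x_i), with an l2 penalty
    on the coefficient vector alpha itself (not an RKHS norm), which is why
    the kernel K need not be symmetric or positive semi-definite.
    """
    m = len(X)
    # Gram-like matrix; K[i, j] = K(x_i, x_j), generally not symmetric here.
    K = np.array([[general_kernel(X[i], X[j]) for j in range(m)]
                  for i in range(m)])
    alpha = np.zeros(m)
    # Ordered pairs (i, j) with y[i] > y[j]: f should rank x_i above x_j.
    pairs = [(i, j) for i in range(m) for j in range(m) if y[i] > y[j]]
    for _ in range(epochs):
        scores = K @ alpha
        grad = np.zeros(m)
        for i, j in pairs:
            # Pairwise least-squares ranking loss:
            # ((y_i - y_j) - (f(x_i) - f(x_j)))^2
            resid = (y[i] - y[j]) - (scores[i] - scores[j])
            grad += -2.0 * resid * (K[i] - K[j])
        grad /= max(len(pairs), 1)
        grad += 2.0 * lam * alpha  # coefficient-based regularization term
        alpha -= lr * grad
    return alpha, K

def predict(alpha, X_train, x):
    # Evaluate the learned expansion f(x) = sum_i alpha_i * K(x, x_i).
    return sum(a * general_kernel(x, xi) for a, xi in zip(alpha, X_train))
```

Because the loss is convex and quadratic in α, plain gradient descent suffices for this sketch; since the penalty acts on α rather than an RKHS norm, no Mercer condition on `general_kernel` is used anywhere.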