Learning preferences between objects is a challenging task that differs notably from standard classification or regression problems: the objective is to predict an ordering of the data points. Furthermore, methods for learning preference relations are usually computationally more demanding than standard classification or regression methods. Recently, we proposed a kernel-based preference learning algorithm, called RankRLS, whose computational complexity is cubic with respect to the number of training examples. The algorithm minimizes a regularized least-squares approximation of a ranking error function that counts the number of incorrectly ranked pairs of data points. When nonlinear kernel functions are used, training may become infeasible for large numbers of examples. In this paper, we propose a sparse approximation of RankRLS whose training complexity is considerably lower than that of basic RankRLS. In our experiments, we consider parse ranking, a common problem in natural language processing, and show that sparse RankRLS significantly outperforms basic RankRLS in this task. To conclude, the advantage of sparse RankRLS is its computational efficiency when dealing with large amounts of training data together with high-dimensional feature representations.
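The abstract does not spell out the algorithmic details, so the following is only a minimal sketch of one common way to realize such a sparse kernel approximation: a subset-of-regressors (Nyström-style) restriction of the pairwise least-squares objective described above. The RBF kernel, the complete pairwise preference graph, and all function and parameter names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1e-3):
    """Gaussian kernel matrix between the rows of A and the rows of B."""
    d2 = (np.sum(A ** 2, axis=1)[:, None]
          + np.sum(B ** 2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def train_sparse_rankrls(X, y, basis_idx, lam=1.0, gamma=1e-3):
    """Subset-of-regressors sparse RankRLS sketch (illustrative, not the
    paper's exact algorithm).

    X: (n, d) training inputs; y: (n,) relevance scores;
    basis_idx: indices of the m << n basis vectors.
    The linear system solved here is m x m, so the cost scales roughly as
    O(m^2 n) rather than the O(n^3) of basic RankRLS.
    """
    n = X.shape[0]
    # Laplacian of the complete pairwise preference graph: (y - f)^T L (y - f)
    # is proportional to the sum over all pairs of squared ranking errors
    # ((y_i - y_j) - (f_i - f_j))^2, the least-squares surrogate of the
    # pairwise misranking count.
    L = n * np.eye(n) - np.ones((n, n))
    K_bn = rbf_kernel(X[basis_idx], X, gamma)             # (m, n)
    K_bb = rbf_kernel(X[basis_idx], X[basis_idx], gamma)  # (m, m)
    # Normal equations of the restricted regularized objective:
    # (K_bn L K_nb + lam * K_bb) a = K_bn L y
    A = K_bn @ L @ K_bn.T + lam * K_bb
    b = K_bn @ (L @ y)
    return np.linalg.solve(A, b)

def predict(X_train, basis_idx, a, X_new, gamma=1e-3):
    """Score new points; sorting by the score gives the predicted ranking."""
    return rbf_kernel(X_new, X_train[basis_idx], gamma) @ a

# Illustrative usage with synthetic data.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = X @ rng.standard_normal(10)  # latent scores inducing the ordering
basis_idx = rng.choice(200, size=20, replace=False)
a = train_sparse_rankrls(X, y, basis_idx)
scores = predict(X, basis_idx, a, X)
```

In this sketch the predictor is expanded over only the m chosen basis vectors, which is what keeps both training and prediction cheap; the dense n x n Laplacian is kept explicit for readability, though its structure (L = nI - 11^T) would let the products be computed without ever forming it.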