Ranking problems have recently become an important research topic in the joint field of machine learning and information retrieval. This paper presents a new splitting rule that introduces a metric, namely an impurity measure, for constructing decision trees for ranking tasks. We provide a theoretical basis and intuitive explanations for the splitting rule. Our approach is also relevant to collaborative filtering, since it handles categorical data and selects relevant features. Experiments illustrate our ranking approach; the results show that our algorithm outperforms both perceptron-based ranking and classification-tree algorithms in terms of accuracy as well as speed.
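The paper's specific impurity measure for ranking is not reproduced here; as a generic sketch of how an impurity-based splitting rule works, the snippet below uses Gini impurity to pick, among the values of one categorical feature, the equality test that most reduces weighted impurity. The function names (`gini`, `best_split`) and the use of Gini rather than the paper's metric are illustrative assumptions.

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a multiset of (possibly ordinal) labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(rows, labels, feature):
    """Return (value, gain): the category of `feature` whose equality
    split gives the largest weighted impurity reduction.

    This is a generic stand-in for the paper's ranking-specific
    impurity measure, not the authors' actual splitting rule.
    """
    n = len(rows)
    parent = gini(labels)
    best = None
    for value in {r[feature] for r in rows}:
        left = [y for r, y in zip(rows, labels) if r[feature] == value]
        right = [y for r, y in zip(rows, labels) if r[feature] != value]
        gain = (parent
                - (len(left) / n) * gini(left)
                - (len(right) / n) * gini(right))
        if best is None or gain > best[1]:
            best = (value, gain)
    return best
```

A tree-induction loop would apply `best_split` across all features at each node, recursing until the impurity reduction falls below a threshold; swapping `gini` for a rank-aware impurity is what distinguishes a ranking tree from a plain classification tree.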