Improve Decision Trees for Probability-Based Ranking by Lazy Learners

  • Authors:
  • Han Liang; Yuhong Yan

  • Affiliations:
  • University of New Brunswick, Canada; National Research Council of Canada, Canada

  • Venue:
  • ICTAI '06 Proceedings of the 18th IEEE International Conference on Tools with Artificial Intelligence
  • Year:
  • 2006

Abstract

Existing work shows that classic decision trees have inherent deficiencies in producing good probability-based rankings (e.g., as measured by AUC). This paper aims to improve ranking performance within the decision-tree paradigm by presenting two new models. The intuition behind our work is that probability-based ranking is a relative metric among samples; distinct probability estimates are therefore crucial for accurate ranking. The first model, Lazy Distance-based Tree (LDTree), uses a lazy learner at each leaf to explicitly distinguish the different contributions of leaf samples when estimating the class-membership probabilities of an unlabeled sample. The second model, Eager Distance-based Tree (EDTree), improves on LDTree by recasting it as an eager algorithm. In both models, each unlabeled sample is assigned a set of unique class-membership probabilities instead of a uniform one, which gives finer resolution for differentiating samples and thereby improves ranking. Experiments on 34 UCI data sets verify that our models greatly outperform C4.5, C4.4, and other standard smoothing methods designed for better ranking.