Information-Based Evaluation Criterion for Classifier's Performance
With the growth of interest in data mining, there has been increasing interest in applying machine learning algorithms to real-world problems. This raises the question of how to evaluate the performance of machine learning algorithms. The standard procedure repeatedly samples predictive accuracy until a statistically significant difference arises between competing algorithms; that procedure fails to take into account the calibration of predictions. An alternative procedure uses an information reward measure, due to I.J. Good, which is sensitive both to domain knowledge (predictive accuracy) and to calibration. We analyze this measure, relating it to Kullback-Leibler distance, and apply it to five well-known machine learning algorithms across a variety of problems, demonstrating how their assessments under accuracy and under information reward can diverge. Finally, we examine experimentally how information reward varies as a function of calibration and accuracy.
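To illustrate the idea, here is a minimal sketch, not taken from the paper, assuming Good's standard binary form of information reward: 1 + log2 of the probability the classifier assigned to the class that actually occurred. The scenario (two classifiers with identical accuracy but different calibration) and all variable names are ours:

    import math

    def information_reward(p_true: float) -> float:
        """Good's information reward for a single binary prediction.

        p_true is the probability the classifier assigned to the class
        that actually occurred. The reward 1 + log2(p_true) is 1 for a
        certain correct prediction, 0 for an uninformative p = 0.5, and
        tends to minus infinity as a confident prediction turns out wrong.
        """
        return 1.0 + math.log2(p_true)

    # Two hypothetical classifiers with identical 80% accuracy but
    # different calibration: one states probability 0.8 for its
    # predicted class (well calibrated), the other 0.99 (overconfident).
    outcomes = [True, True, True, True, False]  # 4 of 5 predictions correct
    for label, p in [("calibrated (p=0.8)", 0.8),
                     ("overconfident (p=0.99)", 0.99)]:
        rewards = [information_reward(p if correct else 1.0 - p)
                   for correct in outcomes]
        print(f"{label}: mean information reward = "
              f"{sum(rewards) / len(rewards):.3f}")

The calibrated classifier averages about +0.278 while the overconfident one averages about -0.340, even though both are 80% accurate: the single confident error costs 1 + log2(0.01), roughly -5.6. More generally, for outcomes occurring with true rate q, the expected reward of a stated probability p is 1 + q log2 p + (1 - q) log2(1 - p), which is maximized at p = q and falls short of that maximum by exactly the Kullback-Leibler distance from q to p; this appears to be the relation the abstract alludes to.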