Evaluation measures play an important role in machine learning: they are used not only to compare different learning algorithms, but also, often, as the objectives to optimize when constructing learning models. Both formal and empirical comparisons of evaluation measures have been published. In this paper, we propose a general approach for constructing new measures from existing ones, and we prove that the new measures are consistent with, and finer than, the originals. We also show on artificial datasets that the new measures correlate more strongly with RMSE (Root Mean Square Error). Finally, we demonstrate experimentally that greedy-search-based algorithms (such as artificial neural networks) trained with the new, finer measures usually achieve better predictive performance. This provides a general approach for improving the predictive performance of existing learning algorithms based on greedy search.
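To make the idea of a measure that is "consistent with and finer than" an existing one concrete, here is a minimal sketch (not the paper's exact construction): a coarse measure such as accuracy is refined by breaking its ties with a second measure such as AUC. Two models with identical accuracy can then still be distinguished, while any model strictly better under accuracy remains strictly better under the combined measure. The function names below are illustrative, not taken from the paper.

```python
def accuracy(y_true, y_score, threshold=0.5):
    """Fraction of examples whose thresholded score matches the label."""
    preds = [1 if s >= threshold else 0 for s in y_score]
    return sum(p == t for p, t in zip(preds, y_true)) / len(y_true)

def auc(y_true, y_score):
    """AUC via the Wilcoxon-Mann-Whitney statistic: the fraction of
    (positive, negative) pairs ranked correctly, counting ties as 0.5."""
    pos = [s for s, t in zip(y_score, y_true) if t == 1]
    neg = [s for s, t in zip(y_score, y_true) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def finer_measure(y_true, y_score):
    """A measure finer than accuracy alone: compare by accuracy first,
    and break ties with AUC (tuples compare lexicographically)."""
    return (accuracy(y_true, y_score), auc(y_true, y_score))
```

For example, two rankings with identical accuracy 0.5 on labels `[1, 1, 0, 0]` can differ in AUC, so `finer_measure` orders them while accuracy alone cannot; this extra resolution is what a greedy search can exploit when many candidate models tie under the coarse measure.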