Reliable estimation of the classification performance of inferred predictive models is difficult when working with small data sets, and cross-validation is a typical strategy for estimating performance in this setting. However, many standard approaches to cross-validation suffer from extensive bias or variance when the area under the ROC curve (AUC) is used as the performance measure. This issue is explored through an extensive simulation study. Leave-pair-out cross-validation is proposed for conditional AUC estimation, as it is almost unbiased and its variance is as low as that of the best alternative approaches. When regularized least-squares based learners are used, efficient algorithms exist for calculating the leave-pair-out cross-validation estimate.
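To make the leave-pair-out idea concrete, the following is a minimal sketch, not the efficient algorithm referred to above but a naive reference implementation: for every (positive, negative) pair, a regularized least-squares model is trained on all remaining examples, and the AUC estimate is the fraction of held-out pairs the model orders correctly. The function name `lpo_cv_auc` and the regularization parameter `lam` are illustrative choices.

```python
import numpy as np

def lpo_cv_auc(X, y, lam=1.0):
    """Naive leave-pair-out cross-validation estimate of AUC.

    X : (n, d) feature matrix; y : labels in {+1, -1}.
    Each (positive, negative) pair is held out in turn; a ridge
    (regularized least-squares) model is fit on the rest, and the
    pair counts as 1 if ordered correctly, 0.5 on a tie, 0 otherwise.
    """
    pos = np.where(y == 1)[0]
    neg = np.where(y == -1)[0]
    n, d = X.shape
    wins = 0.0
    for i in pos:
        for j in neg:
            mask = np.ones(n, dtype=bool)
            mask[[i, j]] = False
            Xt, yt = X[mask], y[mask]
            # closed-form ridge solution: w = (X^T X + lam I)^{-1} X^T y
            w = np.linalg.solve(Xt.T @ Xt + lam * np.eye(d), Xt.T @ yt)
            si, sj = X[i] @ w, X[j] @ w
            wins += 1.0 if si > sj else (0.5 if si == sj else 0.0)
    return wins / (len(pos) * len(neg))
```

This retrains the model once per pair, so its cost grows with the number of positive-negative pairs; the efficient algorithms mentioned in the abstract avoid this by exploiting the algebraic structure of regularized least-squares.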