Objective: In this paper we present an evaluation of the role of reliability indicators in glaucoma severity prediction. In particular, we investigate whether it is possible to extract useful information from tests that would normally be discarded because they are considered unreliable.

Methods: We set up a predictive modelling framework to predict glaucoma severity from visual field (VF) test sensitivities under different reliability scenarios. Three quality indicators were considered in this study: false positive rate, false negative rate, and fixation losses. Glaucoma severity was evaluated using a 3-level version of the Advanced Glaucoma Intervention Study (AGIS) scoring metric. A bootstrapping and class balancing technique was designed to overcome problems related to small sample size and unbalanced classes. Naive Bayes was selected as the classification model. We also evaluated Bayesian networks to understand the relationships between the different anatomical sectors on the VF map.

Results: The methods were tested on a data set of 28,778 VF tests collected at Moorfields Eye Hospital between 1986 and 2010. Applying the Friedman test followed by the post hoc Tukey honestly significant difference (HSD) test, we observed that classifiers trained on tests of any kind, regardless of reliability, performed comparably to the classifier trained only on fully reliable tests (p-value > 0.01). Moreover, we showed that the different quality indicators affected prediction results differently. Training classifiers on tests that exceeded the fixation losses threshold did not deteriorate classification results (p-value > 0.01). On the contrary, using only tests that failed to comply with the constraint on false negatives significantly decreased the accuracy of the results (p-value < 0.01).
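The abstract does not detail the bootstrapping and class balancing technique itself, so the following is a minimal sketch of one common approach under assumed mechanics: per-class bootstrap resampling up to the majority class size. The function name `balanced_bootstrap` and its parameters are illustrative assumptions, not the authors' implementation.

```python
# Per-class bootstrap oversampling to the majority class size
# (illustrative sketch only; the paper's balancing technique is not
# described in the abstract).
import numpy as np
from sklearn.utils import resample

def balanced_bootstrap(X, y, random_state=0):
    """Bootstrap each class with replacement up to the largest class size."""
    classes, counts = np.unique(y, return_counts=True)
    n_target = int(counts.max())
    X_parts, y_parts = [], []
    for c in classes:
        X_c = X[y == c]
        X_boot = resample(X_c, replace=True, n_samples=n_target,
                          random_state=random_state)
        X_parts.append(X_boot)
        y_parts.append(np.full(n_target, c))
    return np.vstack(X_parts), np.concatenate(y_parts)
```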
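Likewise, the abstract does not specify which Naive Bayes variant was applied to the VF sensitivities. A minimal sketch with scikit-learn's `GaussianNB` might look as follows; the synthetic data, the 54-feature layout (loosely mirroring a 24-2 VF test grid), and the cross-validation setup are all assumptions for illustration.

```python
# Naive Bayes severity prediction on synthetic stand-in data
# (GaussianNB assumes continuous features; the paper's exact variant,
# features, and data are not given in the abstract).
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(loc=25.0, scale=5.0, size=(300, 54))  # 54 VF sensitivities per test
y = rng.integers(0, 3, size=300)                     # 3-level AGIS-style severity

scores = cross_val_score(GaussianNB(), X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```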
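The statistical comparison (Friedman test followed by a Tukey HSD post hoc) can be reproduced in outline with SciPy and statsmodels. The accuracy values below are synthetic placeholders, not the paper's results, and `pairwise_tukeyhsd` treats the groups as independent samples, which may differ from the authors' exact post hoc procedure.

```python
# Sketch of the comparison described in the abstract: Friedman test across
# classifiers, then a Tukey HSD post hoc at alpha = 0.01.
import numpy as np
from scipy.stats import friedmanchisquare
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Rows: evaluation folds; columns: classifiers trained under different
# reliability scenarios (synthetic accuracies for illustration only).
acc = rng.normal(loc=[0.80, 0.79, 0.72], scale=0.02, size=(10, 3))

stat, p = friedmanchisquare(acc[:, 0], acc[:, 1], acc[:, 2])
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")

scores = acc.T.ravel()
groups = np.repeat(["all tests", "fixation-loss fails", "false-neg fails"], 10)
print(pairwise_tukeyhsd(scores, groups, alpha=0.01))
```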