We consider the average classification error rate as a function of the number of training examples, and investigate upper and lower bounds on this error for a class of commonly used algorithms based on inductive learning from examples. We arrive at the surprising conclusion that, contrary to what one might expect, the error rate of some algorithms does not decrease monotonically with the number of training examples; instead, it initially increases up to a certain point and only then begins to decrease. Moreover, the classification quality of some algorithms is no better than that of the naive algorithm. We show that for simple monomials, even with an exponentially large training set, the classification quality of some methods is no better than if only one or a few training examples had been used.
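To make the quantity under study concrete, the following is a minimal simulation sketch, not any of the algorithms analyzed in the paper: the target concept is a hypothetical monomial (here x[0] AND x[1]) over Boolean attributes, the learner is a naive memorization rule that labels unseen examples with the training-set majority class, and the average error rate is estimated empirically for several training-set sizes.

```python
import random

def target(x):
    # hypothetical target monomial: x[0] AND x[1]
    return x[0] and x[1]

def avg_error(n_vars, n_train, n_test, rng):
    """Estimate the error rate of a naive memorization learner.

    The learner stores the training examples verbatim; any unseen
    input is classified with the majority class of the training set
    (ties broken toward class 0).
    """
    memo = {}
    labels = []
    for _ in range(n_train):
        x = tuple(rng.randint(0, 1) for _ in range(n_vars))
        y = target(x)
        memo[x] = y
        labels.append(y)
    default = 1 if 2 * sum(labels) > len(labels) else 0

    errors = 0
    for _ in range(n_test):
        x = tuple(rng.randint(0, 1) for _ in range(n_vars))
        pred = memo.get(x, default)
        errors += (pred != target(x))
    return errors / n_test

if __name__ == "__main__":
    rng = random.Random(0)
    for m in (1, 10, 100, 1000):
        print(m, round(avg_error(8, m, 2000, rng), 3))
```

Plotting the printed error against the training-set size m is one way to check whether the curve is monotone for a given learner; the paper's point is that for some rule-induction methods it is not.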