The accuracy of the rules produced by a concept learning system can be degraded by errors in the data. Although such errors are most commonly attributed to random noise, they can also arise from "ill-defined" attributes (attributes that are too general or too specific), which produce systematic classification errors. We present a computer program called Newton, which exploits the fact that ill-defined attributes create an ordered error pattern among the instances: it computes hypotheses that explain the classification errors of a concept in terms of too-general or too-specific attributes. Extensive empirical testing shows that Newton identifies such attributes with a prediction rate above 95%.
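The contrast between random noise and an ordered error pattern can be illustrated with a toy sketch (this is not Newton itself; the attribute names, instances, and rule are hypothetical). When a learned rule relies on a too-general attribute, every resulting error is a false positive that shares the same attribute value, so the errors line up in a pattern that implicates that attribute:

```python
# Illustrative sketch only: a too-general attribute produces a
# systematic error pattern rather than random misclassifications.
# Attribute names and instances are hypothetical.

instances = [
    # True concept: label is True iff has_feathers.
    {"has_wings": True,  "has_feathers": True,  "label": True},   # bird
    {"has_wings": True,  "has_feathers": True,  "label": True},   # bird
    {"has_wings": True,  "has_feathers": False, "label": False},  # bat
    {"has_wings": True,  "has_feathers": False, "label": False},  # plane
    {"has_wings": False, "has_feathers": False, "label": False},  # dog
]

# Suppose the learned rule uses has_wings, an attribute more general
# than the target attribute has_feathers.
def rule(x):
    return x["has_wings"]

errors = [x for x in instances if rule(x) != x["label"]]

# Every error is a false positive with has_wings=True: an ordered
# pattern pointing at has_wings as a too-general attribute. A
# too-specific attribute would instead yield only false negatives.
assert all(rule(x) and not x["label"] for x in errors)
print(len(errors))  # prints 2
```

The key observation is directional: too-general attributes admit extra instances (false positives only), while too-specific ones exclude covered instances (false negatives only), so the sign of the errors hints at which kind of attribute defect to hypothesize.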