Flexibly exploiting prior knowledge in empirical learning

  • Authors:
  • Julio Ortega; Doug Fisher

  • Affiliations:
  • Computer Science Department, Vanderbilt University, Nashville, Tennessee (both authors)

  • Venue:
  • IJCAI'95: Proceedings of the 14th International Joint Conference on Artificial Intelligence - Volume 2
  • Year:
  • 1995


Abstract

This paper presents a method for incorporating knowledge from possibly imperfect models and domain theories into the inductive learning of decision trees for classification. The approach assumes that a model or domain theory reflects useful prior knowledge of the task. Thus, the default bias is to accept the model's predictions as accurate, even in the face of somewhat contradictory data, which may be unrepresentative or noisy. However, our approach allows the system to abandon the model or domain theory, or portions thereof, in the face of sufficiently contradictory data. In particular, we use C4.5 to induce decision trees from data that has been augmented with model- or domain-theory-derived features. We weakly bias the system to select model-derived features during decision tree induction, but this preference is not dogmatically applied. Our experiments vary the imperfection in a model, the representativeness of the data, and the veracity with which model-derived features are preferred.
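The core idea of the abstract — augment the data with features derived from a model or domain theory, then weakly prefer those features during split selection — can be illustrated with a minimal sketch. This is not the paper's implementation: the `bias` factor, the feature names, and the gain-based split selection are illustrative assumptions (C4.5 itself uses gain ratio and many refinements not shown here).

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a list of class labels
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, feat):
    # information gain of a boolean feature over the given examples
    gain = entropy(labels)
    for v in (True, False):
        subset = [l for r, l in zip(rows, labels) if r[feat] == v]
        if subset:
            gain -= len(subset) / len(labels) * entropy(subset)
    return gain

def best_split(rows, labels, features, derived, bias=1.1):
    # Weak, non-dogmatic preference: the gain of a model-derived feature
    # is multiplied by a small factor, so it wins ties and near-ties but
    # is abandoned when the data contradict it strongly enough.
    def score(f):
        g = info_gain(rows, labels, f)
        return g * bias if f in derived else g
    return max(features, key=score)

# Toy data: 'a' is a raw feature, 'm' a (hypothetical) model-derived one;
# both predict the class equally well, so the weak bias breaks the tie.
rows = [{'a': True, 'm': True}, {'a': False, 'm': False}] * 3
labels = ['+', '-'] * 3
print(best_split(rows, labels, ['a', 'm'], derived={'m'}))  # -> m
```

Because the preference is only a multiplicative nudge on the gain score, a raw feature with substantially higher gain still wins, which mirrors the paper's point that the model can be overridden by sufficiently contradictory data.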