Complexity-Based Induction

  • Authors:
  • Darrell Conklin; Ian H. Witten

  • Affiliations:
  • Darrell Conklin: Department of Computing and Information Science, Queen's University, Kingston, Ontario, Canada, K7L 3N6 (conklin@qucis.queensu.ca)
  • Ian H. Witten: Department of Computer Science, University of Waikato, Hamilton, New Zealand (ihw@waikato.ac.nz)

  • Venue:
  • Machine Learning
  • Year:
  • 1994

Abstract

A central problem in inductive logic programming is theory evaluation. Without some sort of preference criterion, any two theories that explain a set of examples are equally acceptable. This paper presents a scheme for evaluating alternative inductive theories based on an objective preference criterion. It strives to extract maximal redundancy from examples, transforming structure into randomness. A major strength of the method is its application to learning problems where negative examples of concepts are scarce or unavailable. A new measure called model complexity is introduced, and its use is illustrated and compared with a proof complexity measure on relational learning tasks. The complementarity of model and proof complexity parallels that of model- and proof-theoretic semantics. Model complexity, where applicable, seems to be an appropriate measure for evaluating inductive logic theories.
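The "extract maximal redundancy, transforming structure into randomness" idea can be sketched as a toy description-length preference in Python. This is an illustrative minimum-description-length-style sketch under assumed definitions, not the paper's actual model-complexity or proof-complexity measures: each candidate theory is scored by the length of its own description plus the length of the residual examples it leaves unexplained, and the lowest total is preferred.

```python
# Toy description-length preference criterion (illustrative sketch only;
# the paper's model-complexity measure is not defined here).

def description_length(theory: str, residual: str) -> int:
    """Total cost in characters: theory description + unexplained residual."""
    return len(theory) + len(residual)

def prefer(theories):
    """Return the (theory, residual) pair with the minimal total cost."""
    return min(theories, key=lambda t: description_length(*t))

# Two hypothetical theories explaining the same 256-character example string:
examples = "ab" * 128
structured = ("repeat 'ab' 128 times", "")  # redundancy fully extracted
memorizing = ("", examples)                 # examples stored verbatim
best = prefer([structured, memorizing])     # structured wins: 21 < 256
```

Both theories "explain" the examples equally well; the criterion breaks the tie by favoring the one that compresses the regularity into a short description, mirroring the abstract's point that without such a criterion the two would be equally acceptable.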