A central problem in inductive logic programming is theory evaluation: without some preference criterion, any two theories that explain a set of examples are equally acceptable. This paper presents a scheme for evaluating alternative inductive theories based on an objective preference criterion. The scheme strives to extract maximal redundancy from the examples, transforming structure into randomness. A major strength of the method is its applicability to learning problems where negative examples of concepts are scarce or unavailable. A new measure called model complexity is introduced, and its use is illustrated and compared with a proof-complexity measure on relational learning tasks. The complementarity of model and proof complexity parallels that of model-theoretic and proof-theoretic semantics. Model complexity, where applicable, appears to be an appropriate measure for evaluating inductive logic theories.
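The idea of preferring the theory that extracts the most redundancy from the examples can be sketched in minimum-description-length terms. The following Python fragment is an illustration of that general two-part-code principle only, not the paper's actual model-complexity measure; the `Theory` class, its bit counts, and the example values are all hypothetical.

```python
# Illustrative MDL-style preference criterion: among theories that all
# explain the same examples, prefer the one with the smallest total
# description length (bits for the theory itself, plus bits for the
# examples encoded with the theory's help). A theory that captures more
# structure leaves less residual randomness to encode.
from dataclasses import dataclass


@dataclass
class Theory:
    name: str
    theory_bits: float    # cost of encoding the theory itself
    residual_bits: float  # cost of encoding the examples given the theory

    @property
    def total_bits(self) -> float:
        return self.theory_bits + self.residual_bits


def prefer(theories):
    """Return the theory with the smallest total description length."""
    return min(theories, key=lambda t: t.total_bits)


# Two hypothetical theories explaining the same examples: a compact rule
# set that compresses the data well, and a near-rote enumeration that
# leaves the examples essentially random.
compact = Theory("compact rules", theory_bits=40.0, residual_bits=15.0)
rote = Theory("rote enumeration", theory_bits=5.0, residual_bits=120.0)

print(prefer([compact, rote]).name)  # -> compact rules
```

Under this criterion the compact theory wins (55 bits versus 125), even though the rote theory is itself cheaper to state, mirroring the abstract's point that a good theory transforms structure in the examples into a shorter overall encoding.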