Probabilistic first-order theory revision from examples

  • Authors:
  • Aline Paes; Kate Revoredo; Gerson Zaverucha; Vitor Santos Costa

  • Affiliations:
  • Department of Systems Engineering and Computer Science – COPPE, Federal University of Rio de Janeiro (UFRJ), Rio de Janeiro, RJ, Brazil (all authors)

  • Venue:
  • ILP'05 Proceedings of the 15th international conference on Inductive Logic Programming
  • Year:
  • 2005


Abstract

Recently, there has been significant work on integrating probabilistic reasoning with first-order logic representations. Learning algorithms have been developed for these models, but they all consider modifications to the entire structure. In previous work, we argued that when the theory is approximately correct, it can be more appropriate to apply theory revision techniques and modify the structure only at the points that failed in classification. The log-likelihood was used to score these modifications and choose the best one. However, this function has been shown to be ill-suited to the propositional Bayesian classification task, where the conditional log-likelihood should be used instead. In the present paper, we extend this revision system, showing that specialization operators are necessary even when there are no negative examples. Moreover, using three databases and four probabilistic score functions, including the conditional log-likelihood, we compare the results of a theory modified only at the points responsible for the misclassification of examples with those of a theory modified over its entire structure.
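The abstract's distinction between the two score functions can be illustrated on a toy model. The sketch below is not from the paper; the joint distribution and data are invented purely to show that the log-likelihood (LL) scores the joint probability P(x, y) of features and class, while the conditional log-likelihood (CLL) scores only the discriminative term P(y | x):

```python
import math

# Toy joint distribution P(x, y) over one boolean feature x and class y.
# Values are illustrative assumptions, not taken from the paper.
joint = {
    (True, True): 0.4,
    (True, False): 0.1,
    (False, True): 0.1,
    (False, False): 0.4,
}

def log_likelihood(examples):
    """LL: sum of log P(x, y) over the examples."""
    return sum(math.log(joint[(x, y)]) for x, y in examples)

def conditional_log_likelihood(examples):
    """CLL: sum of log P(y | x), where P(y | x) = P(x, y) / P(x)."""
    total = 0.0
    for x, y in examples:
        p_x = joint[(x, True)] + joint[(x, False)]  # marginalize out y
        total += math.log(joint[(x, y)] / p_x)
    return total

examples = [(True, True), (False, False), (True, False)]
print(log_likelihood(examples))
print(conditional_log_likelihood(examples))
```

Because the LL also pays for the marginal P(x) of every example, a revision that only improves classification may barely move it, whereas the CLL responds directly to how well the class is predicted; this is the motivation, argued in the propositional Bayesian setting, for preferring CLL as a classification score.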