Learning theories using estimation distribution algorithms and (reduced) bottom clauses

  • Authors:
  • Cristiano Grijó Pitangui; Gerson Zaverucha

  • Affiliations:
  • PESC - COPPE, Universidade Federal do Rio de Janeiro, Rio de Janeiro, RJ, Brazil; PESC - COPPE, Universidade Federal do Rio de Janeiro, Rio de Janeiro, RJ, Brazil

  • Venue:
  • ILP'11 Proceedings of the 21st international conference on Inductive Logic Programming
  • Year:
  • 2011

Abstract

Genetic Algorithms (GAs) are known for their capacity to explore large search spaces, and owing to this ability they have been applied (to some extent) to Inductive Logic Programming (ILP). Although Estimation of Distribution Algorithms (EDAs) generally outperform standard GAs, they had not previously been applied to ILP. This work presents EDA-ILP, an ILP system based on an EDA and inverse entailment, together with its extension, REDA-ILP, which applies the Reduce algorithm to bottom clauses to considerably shrink the search space. Experiments on real-world datasets show that both systems compare favorably to Aleph and to GA-ILP (a variant of EDA-ILP obtained by replacing the EDA with a standard GA). EDA-ILP also compares favorably to Progol-QG/GA (and its other variants) on phase-transition benchmarks. Additionally, REDA-ILP usually obtains simpler theories than EDA-ILP, more efficiently and with equivalent accuracies. These results show that EDAs provide a good basis for stochastic search in ILP.
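The abstract does not specify which EDA the system uses, but the general idea it relies on can be sketched with a minimal UMDA-style EDA: each candidate clause is a bit vector over the literals of a (reduced) bottom clause, and the algorithm iteratively samples a population from per-bit marginal probabilities, selects the fittest individuals, and re-estimates the marginals from them. Everything below (the function names `umda` and `toy_fitness`, the parameter values, and the stand-in fitness) is an illustrative assumption, not the paper's actual EDA-ILP implementation.

```python
import random

def umda(fitness, n_bits, pop_size=50, n_select=20, generations=30, seed=0):
    """UMDA-style EDA sketch: bit i = 1 means literal i of the bottom
    clause is kept in the hypothesis clause (illustrative encoding)."""
    rng = random.Random(seed)
    probs = [0.5] * n_bits  # marginal probability of each bit being 1
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        # sample a population from the current marginals
        pop = [[1 if rng.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        selected = pop[:n_select]
        if fitness(selected[0]) > best_fit:
            best, best_fit = selected[0][:], fitness(selected[0])
        # re-estimate marginals from the selected individuals,
        # clamped away from 0/1 to avoid premature fixation
        probs = [min(0.95, max(0.05,
                 sum(ind[i] for ind in selected) / n_select))
                 for i in range(n_bits)]
    return best, best_fit

# Toy objective standing in for a real coverage-based clause score:
# prefer keeping the first half of the literals and dropping the rest.
def toy_fitness(bits):
    half = len(bits) // 2
    return sum(bits[:half]) + sum(1 - b for b in bits[half:])
```

In a real ILP setting the fitness would evaluate the clause encoded by the bit vector against positive and negative examples; restricting `n_bits` to the literals of a *reduced* bottom clause, as REDA-ILP does, shrinks the search space the EDA must model.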