Reducing examples in relational learning with bounded-treewidth hypotheses

  • Authors:
  • Ondřej Kuželka, Andrea Szabóová, Filip Železný

  • Affiliation:
  • Faculty of Electrical Engineering, Czech Technical University in Prague, Prague, Czech Republic (all authors)

  • Venue:
  • NFMCP'12: Proceedings of the First International Conference on New Frontiers in Mining Complex Patterns
  • Year:
  • 2012

Abstract

Feature selection methods often improve the performance of attribute-value learning. We explore whether examples in relational learning, which take the form of clauses, can likewise be reduced in size to speed up learning without affecting the learned hypothesis. To this end, we introduce the notion of safe reduction: a safely reduced example cannot be distinguished from the original example under the given hypothesis language bias. We then consider the particular, rather permissive, bias of bounded-treewidth clauses and show that under this bias, examples of arbitrary treewidth can be reduced efficiently. The bounded-treewidth bias can be replaced by other assumptions, such as acyclicity, with similar benefits. We evaluate our approach on four data sets with the popular system Aleph and the state-of-the-art relational learner nFOIL. On all four data sets, learning becomes faster for nFOIL, with an order-of-magnitude speed-up on one of them, and more accurate for Aleph.
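
Illustrative example

The minimal Python sketch below is an assumption made for illustration, not the authors' algorithm: it implements the classical theta-subsumption-based clause reduction (Plotkin reduction) that the paper's notion of safe reduction generalizes. The names subsumes and reduce_clause, the encoding of literals as tuples, and the uppercase-variable convention are all choices made here; the subsumption test is brute force and exponential in the worst case, whereas the paper's point is that, under the bounded-treewidth bias, examples of arbitrary treewidth can be reduced efficiently.

    def is_var(term):
        # Prolog-style convention assumed for this sketch: uppercase names
        # are variables, lowercase names are constants.
        return term[:1].isupper()

    def subsumes(c, d):
        """Brute-force theta-subsumption: is there a substitution theta
        such that c*theta is a subset of d?  The problem is NP-complete
        in general; this exhaustive search is only for tiny clauses."""
        lits = list(c)

        def search(i, sub):
            if i == len(lits):
                return True
            pred, *args = lits[i]
            for target in d:
                if target[0] != pred or len(target) != len(lits[i]):
                    continue
                new = dict(sub)
                ok = True
                for a, b in zip(args, target[1:]):
                    if is_var(a):
                        if new.setdefault(a, b) != b:  # conflicting binding
                            ok = False
                            break
                    elif a != b:                       # constants must match
                        ok = False
                        break
                if ok and search(i + 1, new):
                    return True
            return False

        return search(0, {})

    def reduce_clause(c):
        """Plotkin-style reduction: drop a literal whenever the original
        clause theta-subsumes the smaller one, which makes the two
        clauses equivalent; repeat until no literal can be dropped."""
        current = set(c)
        changed = True
        while changed:
            changed = False
            for lit in list(current):
                smaller = current - {lit}
                if smaller and subsumes(current, smaller):
                    current = smaller
                    changed = True
                    break
        return current

    # A ground path a -> b -> c plus a redundant variable-only copy of
    # the same structure; the copy folds into the ground part.
    example = {("edge", "a", "b"), ("edge", "b", "c"),
               ("edge", "U", "V"), ("edge", "V", "W")}
    print(sorted(reduce_clause(example)))
    # -> [('edge', 'a', 'b'), ('edge', 'b', 'c')]

Running the sketch reduces the clause edge(a,b), edge(b,c), edge(U,V), edge(V,W) to edge(a,b), edge(b,c): the variable-only literals map into the ground path under the substitution {U -> a, V -> b, W -> c}, so the two clauses are theta-subsumption-equivalent and no hypothesis clause can distinguish them.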