Feature selection methods often improve the performance of attribute-value learning. We explore whether, analogously, examples in relational learning, which take the form of clauses, can be reduced in size to speed up learning without affecting the learned hypothesis. To this end, we introduce the notion of safe reduction: a safely reduced example cannot be distinguished from the original example under the given hypothesis language bias. Next, we consider the particular, rather permissive bias of bounded-treewidth clauses. We show that under this hypothesis bias, examples of arbitrary treewidth can be reduced efficiently. The bounded-treewidth bias can be replaced by other assumptions, such as acyclicity, with similar benefits. We evaluate our approach on four data sets with the popular system Aleph and the state-of-the-art relational learner nFOIL. On all four data sets, our reduction makes learning with nFOIL faster, achieving an order-of-magnitude speed-up on one of them, and makes learning with Aleph more accurate.
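
For concreteness, the following is a minimal Python sketch of classic θ-reduction of a clause: a literal is dropped when the clause θ-subsumes the remainder, so the two are subsume-equivalent. This illustrates the basic "reduce an example without changing what it entails" idea the abstract builds on; it is not the paper's safe reduction, which additionally exploits the bounded-treewidth hypothesis bias to reduce examples efficiently, whereas this brute-force search is exponential in the worst case. The representation (literals as tuples, uppercase strings as variables, Prolog-style) and all names are illustrative assumptions.

    def is_var(t):
        # Illustrative Prolog-style convention: variables start uppercase.
        return isinstance(t, str) and t[:1].isupper()

    def subsumes(src, dst):
        """Return a substitution theta with src*theta a subset of dst, or None."""
        lits = sorted(src)  # fix an order for the backtracking search

        def extend(theta, i):
            if i == len(lits):
                return theta
            pred, *args = lits[i]
            for cand in dst:  # try to map literal i onto some literal of dst
                if cand[0] != pred or len(cand) != len(lits[i]):
                    continue
                new, ok = dict(theta), True
                for a, b in zip(args, cand[1:]):
                    if is_var(a):
                        if new.get(a, b) != b:  # conflicting binding
                            ok = False
                            break
                        new[a] = b
                    elif a != b:  # constants must match exactly
                        ok = False
                        break
                if ok:
                    res = extend(new, i + 1)
                    if res is not None:
                        return res
            return None

        return extend({}, 0)

    def theta_reduce(clause):
        """Greedily drop literals while the clause stays subsume-equivalent."""
        clause = set(clause)
        changed = True
        while changed:
            changed = False
            for lit in list(clause):
                smaller = clause - {lit}
                # smaller trivially subsumes clause (it is a subset);
                # if clause also subsumes smaller, the two are equivalent.
                if smaller and subsumes(clause, smaller) is not None:
                    clause, changed = smaller, True
                    break
        return clause

On the toy example {edge(X,Y), edge(Y,Z), edge(A,B)}, the literal edge(A,B) is redundant because the substitution {A->X, B->Y} maps the whole clause into the remaining two literals, so theta_reduce returns {edge(X,Y), edge(Y,Z)}; the paper's contribution is, roughly, making this kind of reduction tractable for the bounded-treewidth hypothesis language even when the examples themselves have arbitrary treewidth.
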