Inductive Logic Programming (ILP) is a popular approach for learning classification rules. An important question is how to combine the individual rules into a useful classifier. In some cases, converting each learned rule into a binary feature for a Bayes net learner yields higher accuracy than the standard decision-list approach [3,4,14]. This results in a two-phase process: rules are generated in the first phase, and the classifier is learned in the second. We propose an algorithm that interleaves the two phases by incrementally building a Bayes net during rule learning. Each candidate rule is introduced into the network and scored by whether it improves the performance of the classifier. We call the algorithm SAYU, for Score As You Use. We evaluate two structure learning algorithms: naïve Bayes and tree-augmented naïve Bayes (TAN). We test SAYU on four datasets and see a significant improvement on two of the four applications. Furthermore, the theories SAYU learns tend to contain far fewer rules than those produced by the two-phase approach.
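The sketch below illustrates the interleaved loop the abstract describes; it is not the authors' implementation. Rules are stood in by Python predicates over propositionalised examples, the Bayes net by a hand-rolled Bernoulli naïve Bayes (the paper learns naïve Bayes or TAN structure), and the rule score by tuning-set accuracy (a more faithful score would be area under the precision-recall curve on the tuning set). All names here (sayu, train_nb, and so on) are hypothetical.

```python
import math

def rule_to_feature(rule, examples):
    # A learned rule becomes one binary feature: 1 iff the rule covers the example.
    return [1 if rule(ex) else 0 for ex in examples]

def build_X(rules, examples):
    # Feature matrix: one row per example, one column per kept rule.
    cols = [rule_to_feature(r, examples) for r in rules]
    return [list(row) for row in zip(*cols)]

def train_nb(X, y):
    # Bernoulli naive Bayes with Laplace smoothing -- a stand-in for the
    # paper's naive Bayes / TAN structure learners.
    prior = {c: (y.count(c) + 1) / (len(y) + 2) for c in (0, 1)}
    cond = {}
    for c in (0, 1):
        rows = [x for x, label in zip(X, y) if label == c]
        for j in range(len(X[0])):
            ones = sum(r[j] for r in rows)
            cond[(c, j)] = (ones + 1) / (len(rows) + 2)
    return prior, cond

def prob_pos(model, x):
    prior, cond = model
    logp = {}
    for c in (0, 1):
        lp = math.log(prior[c])
        for j, v in enumerate(x):
            p = cond[(c, j)]
            lp += math.log(p if v else 1.0 - p)
        logp[c] = lp
    m = max(logp.values())
    odds = {c: math.exp(lp - m) for c, lp in logp.items()}
    return odds[1] / (odds[0] + odds[1])

def tune_score(model, X, y):
    # Stand-in metric: tuning-set accuracy.
    hits = sum((prob_pos(model, x) >= 0.5) == bool(label)
               for x, label in zip(X, y))
    return hits / len(y)

def sayu(candidate_rules, train_ex, train_y, tune_ex, tune_y):
    # Score As You Use: a candidate rule is kept only if adding its
    # binary feature improves the classifier on the tuning set.
    kept = [lambda ex: True]  # constant bias feature so X is never empty
    best = tune_score(train_nb(build_X(kept, train_ex), train_y),
                      build_X(kept, tune_ex), tune_y)
    for rule in candidate_rules:  # in SAYU these stream from the ILP search
        trial = kept + [rule]
        model = train_nb(build_X(trial, train_ex), train_y)
        s = tune_score(model, build_X(trial, tune_ex), tune_y)
        if s > best:
            kept, best = trial, s
    return kept[1:], best  # drop the bias "rule" when reporting the theory

if __name__ == "__main__":
    # Smoke test with toy examples and two hypothetical rules; a real run
    # would use a held-out tuning set rather than reusing the training data.
    train_ex = [{"a": 1, "b": 0}, {"a": 0, "b": 1},
                {"a": 1, "b": 1}, {"a": 0, "b": 0}]
    train_y = [1, 0, 1, 0]
    rules = [lambda ex: ex["a"] == 1, lambda ex: ex["b"] == 1]
    theory, score = sayu(rules, train_ex, train_y, train_ex, train_y)
    print(len(theory), score)
```

The greedy acceptance test is the point: each rule is judged by its marginal contribution to the classifier it will actually be used in, rather than by coverage or confidence in isolation, which is consistent with SAYU's theories containing far fewer rules than the two-phase approach.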