In relational learning, predictions for an individual are based not only on its own properties but also on the properties of a set of related individuals. Relational classifiers differ in how they handle these sets: some use properties of the set as a whole (via aggregation), others refer to properties of specific individuals in the set, but most classifiers do not combine both. This imposes an undesirable bias on these learners. This article describes a learning approach that avoids this bias, using first order random forests. Essentially, an ensemble of decision trees is constructed in which the tests are first order logic queries. These queries may contain aggregate functions, the argument of which may itself be a first order logic query. Introducing aggregate functions into first order logic, as well as upgrading the forest's uniform feature sampling procedure to the space of first order logic queries, raises a number of complications; we address these and propose solutions. The resulting first order random forest induction algorithm has been implemented and integrated in the ACE-ilProlog system, and experimentally evaluated on a variety of datasets. The results indicate that first order random forests with complex aggregates are an efficient and effective approach to learning relational classifiers that involve aggregates over complex selections.
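To make the idea of an "aggregate over a complex selection" concrete, the following is a minimal Python sketch. The actual system expresses these tests as first order logic queries in ACE-ilProlog; here the selection condition is modeled as a predicate over related records, and the aggregate is applied only to the records that satisfy it. The dataset, field names, and thresholds are hypothetical illustrations, not the paper's API.

```python
# Toy relational dataset: each example is an individual (e.g. a customer)
# together with a set of related records (e.g. its transactions).
examples = [
    {"label": 1, "transactions": [{"amount": 120.0, "kind": "wire"},
                                  {"amount": 80.0,  "kind": "card"},
                                  {"amount": 200.0, "kind": "wire"}]},
    {"label": 0, "transactions": [{"amount": 15.0,  "kind": "card"}]},
    {"label": 1, "transactions": [{"amount": 300.0, "kind": "wire"},
                                  {"amount": 250.0, "kind": "wire"}]},
    {"label": 0, "transactions": [{"amount": 10.0,  "kind": "card"},
                                  {"amount": 5.0,   "kind": "card"}]},
]

def complex_aggregate(agg, selection, threshold):
    """Build a boolean node test: aggregate over the subset of related
    records satisfying `selection`, then compare with `threshold`.
    This mirrors an aggregate function whose argument is itself a query."""
    def test(example):
        selected = [t["amount"] for t in example["transactions"] if selection(t)]
        if not selected:          # empty selection: test fails
            return False
        value = {"count": len, "sum": sum, "max": max}[agg](selected)
        return value >= threshold
    return test

# Example complex aggregate: "the count of wire transfers over 100
# is at least 2" -- an aggregate over a non-trivial selection.
t = complex_aggregate("count",
                      lambda tr: tr["kind"] == "wire" and tr["amount"] > 100,
                      2)
print([t(e) for e in examples])  # one boolean per example
```

In the forest, many such tests would be generated by randomly sampling the aggregate function, the selection condition, and the threshold at each node; the sketch above only shows the shape of a single candidate test.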