Inductive learning from numerical and symbolic data: An integrated framework
Intelligent Data Analysis
Machine learning systems are often distinguished by the kind of representation they use, which can be either propositional or first-order logic. The framework that adopts first-order logic as the representation language for both the learned theories and the observations is known as Inductive Logic Programming (ILP). It has been widely shown in the literature that ILP systems have limitations in dealing with large amounts of numerical information, which is, however, characteristic of most real-world application domains. In this work we present a strategy for handling such information in an incremental relational learning setting, together with its integration with classical symbolic approaches to theory revision. Experiments were carried out on a real-world domain, and a comparison with a state-of-the-art system is reported.
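A common ingredient of strategies that let symbolic learners cope with numerical attributes is supervised discretization: choosing cut points on a continuous attribute so that the resulting intervals separate the classes well. The following is a minimal illustrative sketch of an entropy-based binary split (the general idea behind such discretization, not the specific method proposed in this work; the function names are hypothetical):

```python
from math import log2

def entropy(labels):
    """Shannon entropy of a sequence of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / n) * log2(c / n) for c in counts.values())

def best_threshold(values, labels):
    """Find the cut point on a numeric attribute that maximizes
    information gain, trying midpoints between consecutive sorted values."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    base = entropy([y for _, y in pairs])
    best_gain, best_cut = 0.0, None
    for i in range(1, n):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # no valid cut between equal values
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [y for _, y in pairs[:i]]
        right = [y for _, y in pairs[i:]]
        gain = (base
                - (len(left) / n) * entropy(left)
                - (len(right) / n) * entropy(right))
        if gain > best_gain:
            best_gain, best_cut = gain, cut
    return best_cut, best_gain

# Example: two well-separated clusters of values
cut, gain = best_threshold([1, 2, 3, 10, 11, 12], [0, 0, 0, 1, 1, 1])
# cut is 6.5; the split separates the classes perfectly (gain 1.0)
```

In a relational setting, the discovered interval would then appear as a constraint literal (e.g. a bound on a numeric argument) inside a first-order clause, rather than as a propositional feature.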