C4.5: programs for machine learning
Machine Learning
Solving the multiple instance problem with axis-parallel rectangles
Artificial Intelligence
Learning to classify incomplete examples
Computational learning theory and natural learning systems: Volume IV
Logical settings for concept-learning
Artificial Intelligence
Data mining: practical machine learning tools and techniques with Java implementations
Resource-bounded Relational Reasoning: Induction and Deduction Through Stochastic Matching
Machine Learning - Special issue on multistrategy learning
New Generation Computing
Learning Horn Expressions with LogAn-H
ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning
On Multi-class Problems and Discretization in Inductive Logic Programming
ISMIS '97 Proceedings of the 10th International Symposium on Foundations of Intelligent Systems
Learning Structurally Indeterminate Clauses
ILP '98 Proceedings of the 8th International Workshop on Inductive Logic Programming
Techniques for Dealing with Missing Values in Classification
IDA '97 Proceedings of the Second International Symposium on Advances in Intelligent Data Analysis, Reasoning about Data
Handling Missing Values when Applying Classification Models
The Journal of Machine Learning Research
Learning from incomplete data with infinite imputations
Proceedings of the 25th international conference on Machine learning
We investigate concept learning from incomplete examples, referred to here as ambiguous examples. We start from the learning-from-interpretations setting introduced by L. De Raedt and then follow the informal ideas presented by H. Hirsh to extend the Version Space paradigm to incomplete data: a hypothesis has to be compatible with every piece of information provided regarding the examples. We propose and experiment with an algorithm that, given a set of ambiguous examples, learns a concept as an existential monotone DNF. We show that 1) boolean concepts can be learned even at very high levels of incompleteness, as long as enough information is provided, and 2) monotone DNF, non-monotone DNF (i.e. including negative literals), and attribute-value hypotheses can be learned this way using appropriate background knowledge. We also show that a careful implementation, based on a multi-table representation, is necessary to apply the method at high levels of incompleteness.
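The compatibility requirement described above — a hypothesis must not contradict any piece of information carried by a partially observed example — can be sketched as follows. This is an illustrative simplification, not the paper's actual algorithm: the representation of an ambiguous example as a dict with absent keys standing for unknown attribute values, and the function names, are assumptions.

```python
# A partial (ambiguous) example maps attribute -> True/False;
# attributes absent from the dict are unknown.
# A monotone DNF hypothesis is a list of terms, each term a frozenset
# of attributes interpreted as a conjunction.

def term_status(term, example):
    """Classify a term against a partial example:
    'excluded' if some attribute in the term is known False,
    'covers'   if every attribute in the term is known True,
    'unknown'  otherwise (coverage depends on the missing values)."""
    if any(example.get(a) is False for a in term):
        return "excluded"
    if all(example.get(a) is True for a in term):
        return "covers"
    return "unknown"

def compatible(dnf, example, label):
    """Version-space-style compatibility with an ambiguous example:
    - a positive example must be possibly covered by at least one term
      (no term status may be required to be 'excluded' for all terms);
    - a negative example must not be certainly covered by any term."""
    statuses = [term_status(t, example) for t in dnf]
    if label:  # positive example
        return any(s != "excluded" for s in statuses)
    return all(s != "covers" for s in statuses)

# Hypothetical hypothesis: (a AND b) OR c
dnf = [frozenset({"a", "b"}), frozenset({"c"})]

# b and c unknown: (a AND b) might still fire, so a positive
# example with a=True remains compatible.
print(compatible(dnf, {"a": True}, True))                  # True
# Both terms certainly excluded: incompatible as a positive example.
print(compatible(dnf, {"a": False, "c": False}, True))     # False
# c=True certainly covers the second term: incompatible as negative.
print(compatible(dnf, {"c": True}, False))                 # False
```

A hypothesis search would then retain only DNFs compatible in this sense with every ambiguous example; the multi-table representation mentioned in the abstract is an efficiency device for tracking such candidate terms, not reproduced here.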