Inductive Logic Programming (ILP) involves the construction of first-order definite clause theories from examples and background knowledge. Unlike both traditional Machine Learning and Computational Learning Theory, ILP is based on the lock-step development of theory, implementations and applications. ILP systems have been successfully applied to learning structure-activity rules for drug design, semantic grammar rules, finite element mesh design rules, and rules for the prediction of protein structure and mutagenic molecules. These strong applications contrast with relatively weak PAC-learning results: even highly restricted forms of logic programs are known to be prediction-hard. It has recently been argued that this mismatch is due to distributional assumptions made in application domains, assumptions that can be modelled as a Bayesian prior probability representing subjective degrees of belief. Other authors have argued for the use of Bayesian prior distributions for reasons different from those given here, though this has not led to a new model of polynomial-time learnability. Incorporating Bayesian prior distributions over time-bounded hypotheses into PAC leads to a new model called U-learnability. It is argued that U-learnability is more appropriate than PAC for universal (Turing-computable) languages. Time-bounded logic programs have been shown to be polynomially U-learnable under certain distributions. The use of time-bounded hypotheses enforces decidability and allows a unified characterization of speed-up learning and inductive learning. U-learnability has as special cases both PAC and Natarajan's model of speed-up learning.
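As a rough, non-authoritative sketch of the kind of success criterion involved (the symbols $D_H$, $D_X$, $T$, $L$, $m$, $\epsilon$, $\delta$ below are illustrative assumptions, not the paper's own notation): where PAC demands success against a worst-case target, a prior-based criterion would draw the target $T$ from a distribution $D_H$ over time-bounded hypotheses and draw $m$ examples from an instance distribution $D_X$, requiring

% Hedged sketch of a PAC-style criterion augmented with a hypothesis prior;
% all notation is illustrative, not taken from the paper.
\[
\Pr_{T \sim D_H,\; \vec{x} \sim D_X^{\,m}}
\Big[\, \mathrm{error}_{D_X}\big( L(\vec{x},\, T(\vec{x})) \big) \le \epsilon \,\Big] \;\ge\; 1 - \delta,
\]

with the sample size $m$ and the running time of the learner $L$ polynomial in $1/\epsilon$ and $1/\delta$. On this sketch, the probability is taken over the choice of target as well as over the examples, which is what would allow a favourable prior to make learning tractable on average even when worst-case (PAC) learning is prediction-hard.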