On the role of procrastination in machine learning
Information and Computation
Elementary formal systems, intrinsic complexity, and procrastination
Information and Computation
Ordinal mind change complexity of language identification
Theoretical Computer Science
From Logic to Logic Programming
From Logic to Logic Programming
Foundations of Inductive Logic Programming
Foundations of Inductive Logic Programming
Classification using information
Annals of Mathematics and Artificial Intelligence
Counting Extensional Differences in BC-Learning
ICGI '00 Proceedings of the 5th International Colloquium on Grammatical Inference: Algorithms and Applications
Learning in Logic with RichProlog
ICLP '02 Proceedings of the 18th International Conference on Logic Programming
A General Theory of Deduction, Induction, and Learning
DS '01 Proceedings of the 4th International Conference on Discovery Science
Mind change efficient learning
Information and Computation
On ordinal VC-dimension and some notions of complexity
Theoretical Computer Science - Algorithmic learning theory
On the data consumption benefits of accepting increased uncertainty
Theoretical Computer Science
Absolute versus probabilistic classification in a logical setting
Theoretical Computer Science
Absolute versus probabilistic classification in a logical setting
ALT'05 Proceedings of the 16th international conference on Algorithmic Learning Theory
Mind change efficient learning
COLT'05 Proceedings of the 18th annual conference on Learning Theory
Many connections have been established between learning and logic, between learning and topology, and between logic and topology. Still, these connections do not lie at the heart of the respective fields: each remains largely independent of the others when attention is restricted to basic notions and main results. We show that connections can in fact be made at a fundamental level, resulting in a parametrized logic that needs topological notions for its early development and notions from learning theory for its interpretation and applicability.

One of the key properties of first-order logic is that the classical notion of logical consequence is compact. We generalize the notion of logical consequence, and we generalize compactness to β-weak compactness, where β is an ordinal. The effect is to stratify the set of generalized logical consequences of a theory into levels, and levels into layers. Deduction corresponds to the lower layer of the first level above the underlying theory, learning with fewer than β mind changes to layer β of the first level, and learning in the limit to the first layer of the second level. Refinements of Borel-like hierarchies provide the topological tools needed to develop the framework.
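For reference, the classical compactness property that the abstract generalizes can be stated as follows. This is the standard textbook form for first-order logical consequence, not the paper's β-parametrized version, which is only alluded to here:

```latex
% Classical compactness of first-order logical consequence:
% if a sentence follows from a theory, it already follows
% from some finite subtheory.
\[
  T \models \varphi
  \quad\Longrightarrow\quad
  \exists\, T_0 \subseteq T \ \text{finite such that}\ T_0 \models \varphi .
\]
```

The β-weak compactness mentioned in the abstract generalizes this property along an ordinal parameter β; its precise definition is given in the paper and is not reproduced here.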