A version space is the collection of concepts consistent with a given set of positive and negative examples. Mitchell [Artificial Intelligence 18 (1982) 203-226] proposed representing a version space by its boundary sets: the maximally general (G) and maximally specific (S) consistent concepts. For many simple concept classes, the sizes of G and S are known to grow exponentially in the number of positive and negative examples. This paper argues that previous work on alternative representations of version spaces has disguised the real question underlying version space reasoning. We instead show that tractable reasoning with version spaces turns out to depend on the consistency problem, i.e., determining whether any concept is consistent with a given set of positive and negative examples. Indeed, we show that tractable version space reasoning is possible if and only if there is an efficient algorithm for the consistency problem. Our observations give rise to new concept classes for which tractable version space reasoning is now possible, e.g., 1-decision lists, monotone depth-two formulas, and halfspaces.
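As an illustration of the abstract's central claim, here is a minimal sketch (not taken from the paper) of an efficient consistency check for one simple concept class, monotone conjunctions over boolean features: a consistent hypothesis exists iff the most specific conjunction covering all positive examples also rejects every negative example. Any class admitting such a polynomial-time consistency test is, by the paper's result, one for which version space reasoning is tractable.

```python
# Sketch (illustrative, assuming the class of monotone conjunctions):
# a consistent concept exists iff the most specific conjunction
# covering all positives rejects every negative.

def most_specific_conjunction(positives):
    """Intersect the positives: keep the feature indices
    that are 1 in every positive example."""
    n = len(positives[0])
    return [i for i in range(n) if all(x[i] == 1 for x in positives)]

def consistent(positives, negatives):
    """Return True iff some monotone conjunction is consistent
    with the labeled examples."""
    s = most_specific_conjunction(positives)
    # s accepts all positives by construction; a consistent concept
    # exists iff s also rejects every negative example.
    return all(any(x[i] == 0 for i in s) for x in negatives)

# Example: examples labeled by the target concept x1 AND x3.
pos = [(1, 0, 1), (1, 1, 1)]
neg = [(1, 1, 0), (0, 0, 1)]
print(consistent(pos, neg))  # True
```

The check runs in time linear in the number of examples and features, so consistency, and hence version space reasoning for this class, is tractable in the paper's sense.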