Language Learning From Texts: Degrees of Intrinsic Complexity and Their Characterizations
COLT '00 Proceedings of the Thirteenth Annual Conference on Computational Learning Theory
Intrinsic complexity is used to measure the complexity of learning areas bounded by broken straight lines (called open semi-hulls), and of learning intersections of such areas. Any strategy that learns such a geometrical concept can be viewed as a sequence of primitive basic strategies; the length of this sequence, together with the complexities of the primitive strategies used, can then be regarded as the complexity of learning the concept in question. We obtain the best possible lower and upper bounds on the complexity of learning open semi-hulls, as well as matching upper and lower bounds on the complexity of learning intersections of such areas. Surprisingly, the upper bounds in both cases turn out to be much lower than those achieved by natural learning strategies. Another surprising result is that learning intersections of open semi-hulls (and of their complements) turns out to be easier than learning open semi-hulls themselves.
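To make the geometric concept concrete, here is a toy sketch of membership testing for a region bounded by a broken straight line. This is an illustration only, not the paper's formal definition: it assumes a "semi-hull" is the open region strictly below a piecewise-linear boundary given by vertices sorted by x-coordinate, and the function name and representation are hypothetical.

```python
from bisect import bisect_right

def below_broken_line(x, y, vertices):
    """Return True if (x, y) lies strictly below the piecewise-linear
    boundary through `vertices` (sorted by x) -- a toy stand-in for
    membership in an open region bounded by a broken straight line.

    The region is "open" in the sense that boundary points are excluded
    (hence the strict inequality below).
    """
    xs = [vx for vx, _ in vertices]
    if x < xs[0] or x > xs[-1]:
        return False  # outside the x-range covered by the boundary
    # Find the boundary segment whose x-interval contains x.
    i = min(bisect_right(xs, x), len(xs) - 1)
    (x0, y0), (x1, y1) = vertices[i - 1], vertices[i]
    # Height of the boundary at x, by linear interpolation.
    t = 0.0 if x1 == x0 else (x - x0) / (x1 - x0)
    boundary_y = y0 + t * (y1 - y0)
    return y < boundary_y  # strict: the boundary itself is excluded

boundary = [(0, 0), (2, 4), (5, 1)]
print(below_broken_line(1, 1, boundary))  # below the segment (0,0)-(2,4)
print(below_broken_line(1, 3, boundary))  # above that segment
```

An intersection of two such regions, as studied in the paper, would then correspond to requiring membership under both boundaries simultaneously.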