Valiant's theory of the learnable is applied to visual concepts in digital pictures. Several visual concepts that are easily perceived by humans are shown to be learnable from positive examples. These concepts include a certain type of inaccurate copy of line drawings, the identification of a subset of objects at specific locations, and pictures of lines of a fixed slope. Several characterizations of visual concepts by templates are shown to be nonlearnable (in Valiant's sense) from positive-only examples. The importance of representation is demonstrated by showing that even though one can easily learn to identify pictures containing at least one of two objects, identifying the objects themselves is sometimes much harder (computationally infeasible).
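The learning-from-positive-examples setting of the abstract can be illustrated with the classic elimination algorithm for conjunctions of Boolean literals, one of the concept classes Valiant showed learnable from positive examples alone. This is a generic illustrative sketch, not the paper's own algorithm; the function names and the example target concept are hypothetical.

```python
# Sketch of Valiant-style learning from positive-only examples:
# learning a conjunction of literals over n Boolean variables.
# (Illustrative only; not the algorithm from the paper above.)

def learn_conjunction(positive_examples, n):
    """Start with the conjunction of all 2n literals (x_i and its
    negation) and drop every literal falsified by some positive
    example.  A literal is a pair (i, sign): sign=True means x_i,
    sign=False means NOT x_i."""
    literals = {(i, True) for i in range(n)} | {(i, False) for i in range(n)}
    for ex in positive_examples:
        # Keep only literals consistent with this positive example.
        literals = {(i, sign) for (i, sign) in literals if ex[i] == sign}
    return literals

def predict(literals, x):
    """Hypothesis classifies x positive iff every surviving literal holds."""
    return all(x[i] == sign for (i, sign) in literals)

# Hypothetical target concept: x0 AND NOT x2, over 3 variables.
positives = [(1, 0, 0), (1, 1, 0)]
h = learn_conjunction(positives, 3)
print(sorted(h))          # surviving literals
print(predict(h, (1, 1, 0)), predict(h, (0, 1, 0)))
```

Because the hypothesis only ever shrinks toward the target and never misclassifies a future positive example it has already absorbed, positive examples suffice; this is the kind of positive-only learnability the abstract contrasts with the nonlearnable template characterizations.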