We study learning scenarios in which multiple learners are involved and "nature" imposes constraints that force the predictions of these learners to behave coherently. This is natural in cognitive learning situations, where multiple learning problems co-exist but their predictions are constrained to produce a valid sentence, image, or other domain representation.

Our theory addresses two fundamental issues in computational learning: (1) the apparent ease with which cognitive systems seem to learn concepts, relative to what the theoretical models predict, and (2) the robustness of learnable concepts to noise in their input. This kind of robustness is especially important in cognitive systems, where multiple concepts are learned and cascaded to produce increasingly complex features.

Existing models of concept learning are extended by requiring the target concept to cohere with other concepts from the concept class. The coherency is expressed via a (Boolean) constraint that the concepts must satisfy. We show how coherency can lead to improvements in the complexity of learning and to increased robustness of the learned hypothesis.
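To make the setting concrete, the following is a minimal illustrative sketch (not the paper's construction): two linear-threshold learners whose joint predictions must satisfy a Boolean coherency constraint, here the hypothetical rule "concept 1 positive implies concept 2 positive". When the raw predictions violate the constraint, the sketch outputs the constraint-satisfying joint labeling that requires overriding the smallest total margin.

```python
import itertools

def margin(w, x):
    # Signed margin of a linear threshold unit: sum_i w_i * x_i.
    return sum(wi * xi for wi, xi in zip(w, x))

def coherent(y1, y2):
    # Example Boolean constraint (an assumption for illustration):
    # a positive prediction for concept 1 forces one for concept 2.
    return (not y1) or y2

def coherent_predict(w1, w2, x):
    # Predict both concepts jointly, restricted to coherent labelings.
    s1, s2 = margin(w1, x), margin(w2, x)
    best, best_cost = None, None
    for y1, y2 in itertools.product([False, True], repeat=2):
        if not coherent(y1, y2):
            continue
        # Cost of this joint label = total margin we must override
        # relative to each learner's raw (sign-based) prediction.
        cost = (abs(s1) if (s1 > 0) != y1 else 0.0) + \
               (abs(s2) if (s2 > 0) != y2 else 0.0)
        if best_cost is None or cost < best_cost:
            best, best_cost = (y1, y2), cost
    return best
```

With `w1 = [1.0]`, `w2 = [-0.5]`, `x = [2.0]`, the raw predictions are (True, False), which violate the constraint; the cheapest coherent correction flips the weaker second prediction, yielding (True, True). The point of the sketch is that the constraint acts as a filter on the joint hypothesis space, which is how coherency can cut sample complexity and absorb noise in individual predictions.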