Communications of the ACM
Information Processing Letters
Learnability with respect to fixed distributions. Theoretical Computer Science.
Measuring the VC-dimension of a learning machine. Neural Computation.
Machine Learning
Combining labeled and unlabeled data with co-training. COLT '98: Proceedings of the Eleventh Annual Conference on Computational Learning Theory.
A Winnow-Based Approach to Context-Sensitive Spelling Correction. Machine Learning, special issue on natural language learning (also: The Eleventh Annual Conference on Computational Learning Theory).
Improving Performance in Neural Networks Using a Boosting Algorithm. Advances in Neural Information Processing Systems 5 (NIPS).
Toward a Theory of Learning Coherent Concepts. Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence.
An Algorithmic Theory of Learning: Robust Concepts and Random Projection. FOCS '99: Proceedings of the 40th Annual Symposium on Foundations of Computer Science.
Generalization Bounds for Linear Learning Algorithm
This paper develops a theory for learning scenarios in which multiple learners co-exist but mutual coherency constraints are imposed on their outcomes. This is natural in cognitive learning situations, where "natural" constraints restrict the outcomes of classifiers so that a valid sentence, image, or other domain representation is produced. We formalize these learning situations, following a model suggested in [11], and study the generalization abilities of learning algorithms under these conditions in several frameworks. We show that the mere existence of coherency constraints, even without the learner's awareness of them, renders the learning problem easier than general theories predict, and explains the ability to generalize well from a fairly small number of examples. In particular, it is shown that within this model one can develop an understanding of several realistic learning situations, such as highly biased training sets and low-dimensional data embedded in high-dimensional instance spaces.
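The core claim of the abstract can be illustrated with a toy sketch (not from the paper; the domain, hypothesis class, and nesting constraint below are hypothetical choices): when two learners' outputs must satisfy a coherency constraint, most joint hypotheses are ruled out, shrinking the effective joint hypothesis space and hence the log|H| term in standard sample-complexity bounds.

```python
from itertools import product

# Toy illustration (hypothetical setup, not the paper's construction):
# each learner picks a Boolean threshold function over a small discrete
# domain, and coherency requires learner B's positive region to be
# nested inside learner A's.

DOMAIN = range(8)

# A hypothesis is a threshold t: h_t(x) = 1 iff x >= t; t = 0..8
# gives 9 distinct functions on this domain.
thresholds = range(9)

def h(t, x):
    return 1 if x >= t else 0

# Unconstrained joint hypothesis space: every (A, B) pair.
all_pairs = list(product(thresholds, thresholds))

# Coherency constraint: wherever B predicts 1, A must also predict 1.
coherent_pairs = [
    (ta, tb) for ta, tb in all_pairs
    if all(h(tb, x) <= h(ta, x) for x in DOMAIN)
]

print(len(all_pairs))       # 81 joint hypotheses without the constraint
print(len(coherent_pairs))  # 45 survive the coherency constraint
```

Even in this tiny example the constraint eliminates nearly half of the joint hypotheses, and the learners benefit from this reduction whether or not they are "aware" of the constraint, mirroring the abstract's point.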