Communications of the ACM.
On the complexity of inductive inference. Information and Control.
Probability and plurality for aggregations of learning machines. Information and Computation.
Probabilistic inductive inference. Journal of the ACM (JACM).
Trade-off among parameters affecting inductive inference. Information and Computation.
COLT '90 Proceedings of the third annual workshop on Computational learning theory.
Relations between probabilistic and team one-shot learners (extended abstract). COLT '91 Proceedings of the fourth annual workshop on Computational learning theory.
Breaking the probability ½ barrier in FIN-type learning. COLT '92 Proceedings of the fifth annual workshop on Computational learning theory.
Capabilities of probabilistic learners with bounded mind changes. COLT '93 Proceedings of the sixth annual conference on Computational learning theory.
The Power of Pluralism for Automatic Program Synthesis. Journal of the ACM (JACM).
Inductive Inference: Theory and Methods. ACM Computing Surveys (CSUR).
An Introduction to the General Theory of Algorithms.
Use of Reduction Arguments in Determining Popperian FIN-Type Learning Capabilities. ALT '93 Proceedings of the 4th International Workshop on Algorithmic Learning Theory.
Probabilistic and Pluralistic Learners with Mind Changes. MFCS '92 Proceedings of the 17th International Symposium on Mathematical Foundations of Computer Science.
Taming teams with mind changes. Journal of Computer and System Sciences.
Learning Behaviors of Functions. Fundamenta Informaticae.
Capabilities of Thoughtful Machines. Fundamenta Informaticae.
We consider the inductive inference model of Gold [15]. Suppose we are given a set of functions that is learnable with a certain number of mind changes and errors. What can we consistently predict about those functions if we are allowed fewer mind changes or errors? In [20] we relaxed the notion of exact learning by considering higher-level properties of the input-output behavior of a given function. In this context, a learner produces a program that describes a property of the given function. Can we predict generic properties such as threshold or modality if we allow fewer mind changes or errors? These questions were completely answered in [20] when the learner is restricted to a single IIM. In this paper we allow a team of IIMs to collaborate in the learning process. Learning is considered successful if at least one team member succeeds. A motivation for this extension is to understand and characterize the properties of a given set of functions that are learnable in a team environment.
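The team success criterion described above can be illustrated with a minimal sketch, under simplifying assumptions not taken from the paper: the target function is drawn from a small finite class of total functions, and each team member searches only part of that class. The names (`CANDIDATES`, `make_learner`, `team_learns`) are illustrative.

```python
# A minimal sketch of team inductive inference (Gold-style), assuming
# a finite hypothesis class. The team learns f if at least one member
# converges to a hypothesis consistent with f on the whole data stream.

# Hypothesis space: candidate total functions, identified by index.
CANDIDATES = [
    lambda x: x,          # identity
    lambda x: 2 * x,      # doubling
    lambda x: x * x,      # squaring
    lambda x: x + 1,      # successor
]

def make_learner(indices):
    """A learner (one IIM) that only considers CANDIDATES[i] for i in
    indices. Given finitely many (x, f(x)) pairs, it conjectures the
    first consistent candidate, or None if none fits."""
    def learner(data):
        for i in indices:
            if all(CANDIDATES[i](x) == y for x, y in data):
                return i
        return None
    return learner

def team_learns(team, f, horizon=20):
    """Feed the team the data stream (0, f(0)), (1, f(1)), ... and
    report success if some member's final conjecture agrees with f
    on the whole horizon -- i.e., at least one member succeeds."""
    data = []
    last = [None] * len(team)
    for x in range(horizon):
        data.append((x, f(x)))
        last = [member(data) for member in team]
    return any(
        h is not None and all(CANDIDATES[h](x) == f(x) for x in range(horizon))
        for h in last
    )

# Two members, each responsible for half the hypothesis class: the
# team succeeds whenever either member's half contains the target.
team = [make_learner([0, 1]), make_learner([2, 3])]
print(team_learns(team, lambda x: x * x))   # second member succeeds -> True
print(team_learns(team, lambda x: 2 * x))   # first member succeeds -> True
print(team_learns(team, lambda x: x + 2))   # outside the class -> False
```

Note that each member alone learns only half the class; the team as a whole learns all of it, which is the motivation for studying pluralism.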