The present paper focuses on some interesting classes of process-control games, where winning essentially means successfully controlling the process. A master for one of these games is an agent who plays a winning strategy. We investigate situations in which even a complete model (given by a program) of a particular game does not provide enough information to synthesize a winning strategy, even in the limit. However, if, in addition to receiving a program, a machine may also watch masters play winning strategies, then the machine is able to learn a winning strategy for the given game in the limit. We study successful learning both from arbitrary masters and from pedagogically useful selected masters, and show that selected masters are strictly more helpful for learning than arbitrary masters. For both kinds of masters, though, there are cases where one can learn programs for winning strategies from masters, but not if one is required to learn a program for the master's own strategy. Likewise, for both kinds of masters, one can learn strictly more watching m + 1 masters than watching only m. Finally, a simulation result is presented in which the presence of a selected master reduces the complexity from infinitely many semantic mind changes to finitely many syntactic ones.
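To make the notion of learning in the limit with mind changes concrete, here is a minimal toy sketch (not the paper's construction): a learner watches a master's (state, move) pairs and outputs, after each observation, an index of a candidate strategy consistent with everything seen so far, switching hypotheses (a "mind change") only when contradicted. The candidate strategies, states, and moves below are purely illustrative assumptions; the paper's results concern general recursive strategies on process-control games.

```python
# Toy illustration of learning in the limit by watching a master play.
# Everything here (the candidate pool, the game, the master) is a
# hypothetical stand-in, not the paper's actual setting.

def limit_learner(candidates, observations):
    """Yield a hypothesis index after each observed (state, move) pair,
    changing its mind only when the current hypothesis disagrees with
    the master's play seen so far."""
    seen = []
    current = 0  # index of the current hypothesis
    for state, move in observations:
        seen.append((state, move))
        if any(candidates[current](s) != m for s, m in seen):
            # mind change: move to the first consistent candidate
            current = next(i for i, c in enumerate(candidates)
                           if all(c(s) == m for s, m in seen))
        yield current

# Toy game: states 0..3; the master's winning strategy is move = state % 2.
candidates = [lambda s: 0, lambda s: 1, lambda s: s % 2]
master_play = [(0, 0), (1, 1), (2, 0), (3, 1)]
guesses = list(limit_learner(candidates, master_play))
# The sequence of guesses stabilizes on index 2, the master's strategy.
```

In this toy version the learner converges after one mind change; the paper's simulation result concerns exactly this kind of trade-off, where watching a selected master replaces infinitely many semantic mind changes with finitely many syntactic ones.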