Suppose a learner is faced with a domain of problems about which it knows nearly nothing. It does not know the distribution of problems, the space of solutions is not smooth, and the reward signal is uninformative, providing perhaps a few bits of information but not enough to steer the learner effectively. How can such a learner ever get off the ground? A common intuition is that if the solutions to these problems share a common structure, and the learner can solve some simple problems by brute force, it should be able to extract useful components from these solutions and, by composing them, explore the solution space more efficiently. Here we formalize this intuition: the solution space is that of typed functional programs, and the acquired knowledge is stored as a stochastic grammar over programs. We propose an iterative procedure for exploring such spaces: in the first step of each iteration, the learner explores a finite subset of the domain, guided by a stochastic grammar; in the second step, the learner compresses the successful solutions from the first step to estimate a new stochastic grammar. We test this procedure on symbolic regression and Boolean circuit learning and show that the learner discovers modular concepts for these domains. Whereas the learner is able to solve almost none of the posed problems in the procedure's first iteration, it rapidly becomes able to solve a large number by gaining abstract knowledge of the structure of the solution space.
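To make the explore-compress loop concrete, here is a minimal sketch, assuming a toy Boolean-circuit domain with a handful of hypothetical primitives (AND, OR, NOT, a single input X). It samples programs from a weighted grammar, keeps the ones that solve tasks, and re-estimates the grammar from primitive-usage counts in the solutions; this frequency re-estimation is only a crude stand-in for the paper's typed functional programs and its compression-based grammar re-estimation, not the authors' actual method.

```python
import random

# Hypothetical primitive set for a toy Boolean-circuit domain (arity, semantics);
# the paper's representation (typed functional programs) is richer than this sketch.
PRIMITIVES = {
    "AND": (2, lambda a, b: a and b),
    "OR":  (2, lambda a, b: a or b),
    "NOT": (1, lambda a: not a),
    "X":   (0, None),  # the single input variable
}

def sample_program(weights, depth=0, max_depth=4):
    """Sample an expression tree from the stochastic grammar (primitive weights)."""
    if depth >= max_depth:
        return ("X",)
    names = list(weights)
    probs = [weights[n] for n in names]
    name = random.choices(names, probs)[0]
    arity = PRIMITIVES[name][0]
    return (name,) + tuple(sample_program(weights, depth + 1, max_depth)
                           for _ in range(arity))

def evaluate(prog, x):
    """Evaluate an expression tree on a single Boolean input x."""
    name, args = prog[0], prog[1:]
    if name == "X":
        return x
    fn = PRIMITIVES[name][1]
    return fn(*(evaluate(a, x) for a in args))

def solves(prog, task):
    """A task is a desired truth table: a list of (input, output) pairs."""
    return all(evaluate(prog, x) == y for x, y in task)

def count_primitives(prog, counts):
    """Accumulate how often each primitive appears in a solution."""
    counts[prog[0]] = counts.get(prog[0], 0) + 1
    for arg in prog[1:]:
        count_primitives(arg, counts)

def explore_compress(tasks, iterations=5, budget=2000):
    # Start from a uniform grammar over primitives.
    weights = {name: 1.0 for name in PRIMITIVES}
    for it in range(iterations):
        # Step 1: explore -- sample a frontier of programs and keep the solvers.
        frontier = [sample_program(weights) for _ in range(budget)]
        solutions = [p for p in frontier for t in tasks if solves(p, t)]
        # Step 2: "compress" -- re-estimate primitive weights from usage counts
        # in successful solutions (a crude proxy for grammar re-estimation).
        counts = {name: 1.0 for name in PRIMITIVES}  # add-one smoothing
        for p in solutions:
            count_primitives(p, counts)
        total = sum(counts.values())
        weights = {name: c / total for name, c in counts.items()}
        print(f"iteration {it}: solved {len(solutions)} (program, task) pairs")
    return weights

if __name__ == "__main__":
    # Two toy tasks over one Boolean input: identity and negation.
    tasks = [
        [(False, False), (True, True)],
        [(False, True), (True, False)],
    ]
    explore_compress(tasks)
```

The key structural point the sketch preserves is the two-step iteration: exploration is biased by the current grammar, and the grammar is updated only from solutions found during exploration, so knowledge about the solution space accumulates across iterations.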