Like a scientist or a playing child, PowerPlay (Schmidhuber, 2011) not only learns new skills to solve given problems but also invents interesting new problems by itself. By design, it continually comes up with the fastest-to-find tasks that are initially novel yet eventually solvable, and it continually simplifies, compresses, or speeds up solutions to previous tasks. Here we describe first experiments with PowerPlay. A self-delimiting recurrent neural network (SLIM RNN; Schmidhuber, 2012) serves as a general computational problem-solving architecture. Its connection weights can encode arbitrary, self-delimiting, halting or non-halting programs that affect both the environment (through effectors) and internal states encoding abstractions of event sequences. Our PowerPlay-driven SLIM RNN learns to become an increasingly general solver of self-invented problems, continually adding new problem-solving procedures to its growing skill repertoire. Extending a recent conference paper (Srivastava, Steunebrink, Stollenga, & Schmidhuber, 2012), we identify interesting emerging developmental stages of our open-ended system. We also show how it automatically self-modularizes, frequently reusing code for previously invented skills, and how it keeps trying to invent novel tasks that can be validated quickly because they do not require too many weight changes affecting too many previous tasks.
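The search described above — repeatedly proposing a (task, solver modification) pair, accepting it only if the new task is novel and the modified solver still passes validation on all previously invented tasks — can be sketched as a minimal loop. This is an illustrative simplification, not the paper's implementation: `propose` and `solves` are hypothetical placeholders, and the toy instantiation below stands in for the SLIM RNN and its weight changes.

```python
import random

def powerplay(propose, solves, solver, repertoire, n_iterations):
    """Minimal sketch of PowerPlay's outer loop (all names are illustrative).

    propose(solver) -> (task, candidate_solver): invents a new task together
        with a modified solver intended to handle it.
    solves(solver, task) -> bool: validates a solver on a task.
    """
    for _ in range(n_iterations):
        task, candidate = propose(solver)
        # Accept the pair only if the task is novel (current solver fails on
        # it), the candidate solves it, and the candidate still solves every
        # previously invented task (no forgetting of earlier skills).
        if (not solves(solver, task)
                and solves(candidate, task)
                and all(solves(candidate, t) for t in repertoire)):
            solver = candidate
            repertoire.append(task)
    return solver, repertoire

# Toy instantiation: a "solver" is just the set of integer tasks it handles,
# and a proposal extends it by one candidate task.
def propose(solver):
    task = random.randrange(100)
    return task, solver | {task}

def solves(solver, task):
    return task in solver

random.seed(0)
solver, repertoire = powerplay(propose, solves, set(), [], 200)
print(len(repertoire))  # number of distinct self-invented, validated tasks
```

In the actual system the acceptance test is what keeps validation cheap: a candidate weight change that touches too many previous tasks forces expensive re-validation, so quickly checkable novel tasks are favored.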