I postulate that human or other intelligent agents function, or should function, as follows. They store all sensory observations as they come in: the data is 'holy.' At any time, given the agent's current coding capabilities, part of the data is compressible by a short and hopefully fast program / description / explanation / world model. In the agent's subjective eyes, such data is more regular and more beautiful than other data. It is well known that knowledge of regularity and repeatability may improve the agent's ability to plan actions leading to external reward. In the absence of such reward, however, known beauty is boring. Interestingness then becomes the first derivative of subjective beauty: as the learning agent improves its compression algorithm, formerly apparently random parts of the data become subjectively more regular and beautiful. Such progress in data compression is measured and maximized by the curiosity drive: create action sequences that extend the observation history and yield previously unknown / unpredictable but quickly learnable algorithmic regularity. I discuss how all of the above can be naturally implemented on computers, through an extension of passive unsupervised learning to the case of active data selection: we reward a general reinforcement learner (with access to the adaptive compressor) for actions that improve the subjective compressibility of the growing data. An unusually large compression breakthrough deserves the name 'discovery.' The creativity of artists, dancers, musicians, and pure mathematicians can be viewed as a by-product of this principle. Several qualitative examples support this hypothesis.
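The core mechanism above — intrinsic reward as the *improvement* in the compressibility of the observation history — can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it stands in for the adaptive compressor with a hypothetical order-0 symbol model whose Shannon code-length estimate shrinks as it learns, and it defines the curiosity reward as the number of bits saved by one learning step.

```python
import math
from collections import Counter

class AdaptiveCompressor:
    """Toy stand-in for the agent's adaptive compressor: an order-0
    symbol model estimating the code length (in bits) of a byte
    sequence under its current statistics."""

    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def code_length(self, data: bytes) -> float:
        # Shannon code length under the current (Laplace-smoothed) model:
        # regular data the model has absorbed costs fewer bits.
        bits = 0.0
        for b in data:
            p = (self.counts[b] + 1) / (self.total + 256)
            bits += -math.log2(p)
        return bits

    def train(self, data: bytes) -> None:
        # One learning step: absorb the data's symbol statistics.
        self.counts.update(data)
        self.total += len(data)

def curiosity_reward(compressor: AdaptiveCompressor, history: bytes) -> float:
    """Intrinsic reward = compression progress: how many bits shorter
    the history's description becomes after one learning step."""
    before = compressor.code_length(history)
    compressor.train(history)
    after = compressor.code_length(history)
    return before - after
```

On first exposure to regular data the reward is large; re-presenting the same data yields ever smaller rewards as its regularity is learned — "known beauty is boring," so the curiosity drive pushes the agent toward observations with still-unlearned, but learnable, structure.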