Compression has been advocated as one of the principles that pervade inductive inference and prediction - and, from there, it has also recurred in definitions and tests of intelligence. However, this connection is less explicit in newer approaches to intelligence. In this paper, we advocate that the notion of compression can appear again in definitions and tests of intelligence through the concepts of 'mindreading' and 'communication' in the context of multi-agent systems and social environments. Our main position is that two-part Minimum Message Length (MML) compression is not only more natural and effective for agents with limited resources, but also much more appropriate for agents in (co-operative) social environments than one-part compression schemes - particularly those using a posterior-weighted mixture of all available models following Solomonoff's theory of prediction. We think that recognising these differences is important to avoid a naive view of 'intelligence as compression' in favour of a better understanding of how, why and where (one-part or two-part, lossless or lossy) compression is needed.
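The contrast between the two coding schemes can be made concrete with a small sketch (a toy illustration, not from the paper; the data, the discrete model class and the uniform prior are all assumptions). A two-part MML code first names one model from the class and then encodes the data with that model, whereas a one-part code encodes the data directly under the prior-weighted mixture of all models, Solomonoff-style:

```python
import math

def nll_bits(data, p):
    """Negative log2-likelihood (in bits) of a binary sequence under Bernoulli(p)."""
    ones = sum(data)
    zeros = len(data) - ones
    return -(ones * math.log2(p) + zeros * math.log2(1 - p))

data = [1, 1, 0, 1, 1, 1, 0, 1]        # toy binary sequence (assumed for illustration)
models = [0.1, 0.3, 0.5, 0.7, 0.9]     # hypothetical discrete model class
prior = 1 / len(models)                # uniform prior over the models

# Two-part (MML-style) code: state which model was chosen (log2 of the class
# size under a uniform prior), then encode the data with that single model.
two_part = min(math.log2(len(models)) + nll_bits(data, p) for p in models)

# One-part code: encode the data under the posterior-weighted mixture of all
# models, without ever committing to any single one of them.
mixture = -math.log2(sum(prior * 2 ** -nll_bits(data, p) for p in models))

print(f"two-part: {two_part:.2f} bits, mixture: {mixture:.2f} bits")
```

The mixture code is never longer than the two-part code (the mixture probability dominates every prior-weighted term), but only the two-part code makes a single model explicit - the property the paper argues matters for agents that must communicate or read each other's minds in social environments.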