Machine Learning - Special issue on context sensitivity and concept drift
Introduction to Reinforcement Learning
Switching Between Two Universal Source Coding Algorithms
DCC '98 Proceedings of the Conference on Data Compression
Convex Optimization
Universal Artificial Intelligence: Sequential Decisions Based On Algorithmic Probability
Efficient algorithms for online convex optimization and their applications
A Monte-Carlo AIXI approximation
Journal of Artificial Intelligence Research
Logarithmic regret algorithms for online convex optimization
COLT'06 Proceedings of the 19th annual conference on Learning Theory
Mixing Strategies in Data Compression
DCC '12 Proceedings of the 2012 Data Compression Conference
The context-tree weighting method: basic properties
IEEE Transactions on Information Theory
One of the key challenges in AIXI approximation is model class approximation: how can Solomonoff Induction be meaningfully approximated without requiring an infeasible amount of computation? This paper advocates a bottom-up approach to this problem by describing a number of principled ensemble techniques for approximate AIXI agents. Each technique works by efficiently combining a set of existing environment models into a single, more powerful model. These techniques have the potential to play an important role in future AIXI approximations.
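The combination step the abstract describes can be illustrated with a standard Bayesian mixture, in the spirit of the mixing strategies cited above: each environment model contributes to the prediction in proportion to a weight, and the weights are updated by each model's likelihood on the observed symbols, so the mixture concentrates on the best model in the class. This is a minimal illustrative sketch, not the paper's actual construction; the class and method names are assumptions.

```python
class BayesMixture:
    """Hypothetical sketch of a Bayesian mixture over environment models.

    Each model must expose predict(history) -> {symbol: probability}.
    The mixture predicts with a weighted sum and updates the weights
    as posterior probabilities after each observed symbol.
    """

    def __init__(self, models, prior=None):
        self.models = models
        n = len(models)
        # Uniform prior over the model class unless one is supplied.
        self.weights = list(prior) if prior else [1.0 / n] * n

    def predict(self, history):
        # Mixture probability: sum_i w_i * p_i(symbol | history)
        probs = {}
        for w, m in zip(self.weights, self.models):
            for sym, p in m.predict(history).items():
                probs[sym] = probs.get(sym, 0.0) + w * p
        return probs

    def update(self, history, symbol):
        # Posterior update: w_i <- w_i * p_i(symbol | history), renormalised.
        new = [w * m.predict(history).get(symbol, 0.0)
               for w, m in zip(self.weights, self.models)]
        total = sum(new)
        self.weights = [w / total for w in new]
```

For example, mixing two biased-coin models and feeding a stream of ones drives essentially all of the weight onto the model that assigns ones higher probability; the mixture's cumulative log-loss is within log(number of models) of the best model's, which is the usual guarantee motivating this kind of ensemble.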