This article develops generalizations of empowerment to continuous states. Empowerment is a recently introduced information-theoretic quantity motivated by hypotheses about the efficiency of the sensorimotor loop in biological organisms, and also by considerations from curiosity-driven learning. For agent–environment systems with stochastic transitions, empowerment measures how much influence an agent has on its environment, but only the influence that the agent can sense with its own sensors. It is an information-theoretic generalization of the joint controllability (influence on the environment) and observability (measurement by sensors) of the environment by the agent, where controllability and observability are usually defined in control theory via the dimensionality of the control and observation spaces. Earlier work showed that empowerment has several interesting and relevant properties: for example, it identifies salient states using only the system dynamics, and it can serve as an intrinsic reward without requiring an external one. However, that work was limited to small-scale, discrete domains, and the state transition probabilities were assumed to be known. The goal of this article is to extend empowerment to the significantly more important and relevant case of continuous vector-valued state spaces and initially unknown state transition probabilities. The continuous state space is addressed by Monte Carlo approximation; the unknown transitions are addressed by model learning and prediction, for which we apply Gaussian process regression with iterated forecasting. In a number of well-known continuous control tasks we examine the dynamics induced by empowerment and include an application to exploration and online model learning.
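In the discrete base case that the abstract contrasts with, empowerment is the Shannon channel capacity of the channel from the agent's actions to its subsequent sensor states, which can be computed with the standard Blahut-Arimoto iteration. Below is a minimal sketch under that framing; the function name and the toy channels are illustrative, not taken from the article:

```python
import numpy as np

def empowerment_blahut_arimoto(p_s_given_a, n_iter=200, tol=1e-10):
    """Channel capacity max_{p(a)} I(A; S') in bits for a discrete
    channel p(s'|a), via Blahut-Arimoto. Rows of p_s_given_a index
    actions, columns index successor sensor states. For a one-step
    action channel this capacity is the (discrete) empowerment."""
    n_a, _ = p_s_given_a.shape
    p_a = np.full(n_a, 1.0 / n_a)  # start from the uniform action distribution
    for _ in range(n_iter):
        # marginal over successor states under the current action distribution
        p_s = p_a @ p_s_given_a
        # per-action weight: exp of KL(p(s'|a) || p(s')) in nats
        with np.errstate(divide="ignore", invalid="ignore"):
            log_ratio = np.where(p_s_given_a > 0,
                                 np.log(p_s_given_a / p_s), 0.0)
        d = np.exp(np.sum(p_s_given_a * log_ratio, axis=1))
        new_p_a = p_a * d
        new_p_a /= new_p_a.sum()
        if np.max(np.abs(new_p_a - p_a)) < tol:
            p_a = new_p_a
            break
        p_a = new_p_a
    # evaluate the mutual information I(A; S') at the final p(a), in bits
    p_s = p_a @ p_s_given_a
    with np.errstate(divide="ignore", invalid="ignore"):
        log2_ratio = np.where(p_s_given_a > 0,
                              np.log2(p_s_given_a / p_s), 0.0)
    return float(np.sum(p_a[:, None] * p_s_given_a * log2_ratio))
```

For a noiseless channel with four actions leading to four distinct states this returns 2.0 bits, and for a channel whose actions are indistinguishable at the sensors it returns 0.0 — matching the intuition that empowerment counts only influence the agent can sense. The article's continuous extension replaces this exact iteration with a Monte Carlo approximation of the channel.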