Unlike traditional reinforcement learning (RL), market-based RL is in principle applicable to worlds described by partially observable Markov decision processes (POMDPs), where an agent must learn short-term memories of relevant previous events in order to execute optimal actions. Most previous work, however, has focused on reactive (MDP) settings rather than POMDPs. Here we reimplement a recent approach to market-based RL and, for the first time, evaluate it in a toy POMDP setting.
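To illustrate why a reactive (memoryless) policy fails under partial observability, the following sketch builds a hypothetical two-step "cue" task, not the paper's actual benchmark: the agent sees a cue at the first step, then faces an aliased observation at decision time, where the rewarded action equals the earlier cue. Any policy mapping only the current observation to an action succeeds at chance level, while one bit of memory suffices for optimal behavior.

```python
import random

# Toy POMDP (illustrative only, assumed for this sketch): at t=0 the agent
# observes a cue in {0, 1}; at t=1 the observation is always 2 (aliased),
# and the action that earns reward equals the earlier cue.

def run_episode(policy, use_memory=False):
    cue = random.randint(0, 1)
    obs = cue                      # cue is visible at t=0
    mem = obs if use_memory else None
    obs = 2                        # aliased observation at decision time
    action = policy(obs, mem)
    return 1 if action == cue else 0

def reactive_policy(obs, mem):
    # Memoryless: must respond identically in both hidden states.
    return 0

def memory_policy(obs, mem):
    # Recalls the stored cue and acts on it.
    return mem

random.seed(0)
n = 10_000
reactive_rate = sum(run_episode(reactive_policy) for _ in range(n)) / n
memory_rate = sum(run_episode(memory_policy, use_memory=True) for _ in range(n)) / n
print(reactive_rate)  # close to 0.5: chance level under aliasing
print(memory_rate)    # 1.0: one bit of memory solves the task
```

This is the simplest form of the state-aliasing problem the abstract refers to; learning *which* past events to store in short-term memory is what makes the POMDP setting harder than the MDP one.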