Markov models have been a keystone of Artificial Intelligence for many decades. However, they remain unsatisfactory when the modelled environment is partially observable: there are pathological examples where no history of fixed length is sufficient for accurate prediction or decision making. On the other hand, working with a hidden state (as in Hidden Markov Models or Partially Observable Markov Decision Processes) carries a high computational cost. To circumvent this problem, we suggest the use of a context-based model. Our approach replaces strict transition probabilities with influences on transitions. The proposed method provides a trade-off between a fully and a partially observable model. We also discuss the capacity of our framework to model hierarchical knowledge and abstraction. Simple examples are given to show the advantages of the algorithm.
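To make the idea of "influences on transitions" concrete, here is a minimal sketch — not the paper's actual algorithm, but a generic context-mixing predictor in the same spirit: instead of a single fixed-order table strictly determining the next-symbol distribution, every matching context of length 0 to K contributes a weighted influence. All class and parameter names (`ContextMixer`, `max_order`) are illustrative assumptions.

```python
from collections import defaultdict

class ContextMixer:
    """Blend next-symbol counts from contexts of several lengths, so each
    context exerts an influence on the prediction rather than one
    fixed-order transition table deciding it outright (illustrative only)."""

    def __init__(self, max_order=3):
        self.max_order = max_order
        # counts[context][symbol] = times `symbol` followed `context`
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, sequence):
        for i, sym in enumerate(sequence):
            for k in range(self.max_order + 1):
                if i >= k:
                    ctx = tuple(sequence[i - k:i])
                    self.counts[ctx][sym] += 1

    def predict(self, history):
        # Longer contexts get larger weight, but every matching context
        # contributes: a soft blend of influences, not a hard lookup.
        scores = defaultdict(float)
        for k in range(min(self.max_order, len(history)) + 1):
            ctx = tuple(history[len(history) - k:])
            table = self.counts.get(ctx)
            if not table:
                continue
            total = sum(table.values())
            for sym, n in table.items():
                scores[sym] += (2 ** k) * n / total
        return max(scores, key=scores.get) if scores else None

m = ContextMixer(max_order=2)
m.train(list("abcabcabd"))
print(m.predict(list("ab")))  # symbol judged most likely after "ab"
```

Because short contexts still vote when a long context has never been seen, the predictor degrades gracefully, which is one plausible way to trade off between a fully observable (long, exact history) and a partially observable (short, ambiguous history) view of the process.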