Adaptive internal state space construction method for reinforcement learning of a real-world agent. Neural Networks, special issue on organisation of computation in brain-like systems.
Stochastic dynamic programming with factored representations. Artificial Intelligence.
Spike-based cross-entropy method for reconstruction. Neurocomputing.
Factored value iteration converges. Acta Cybernetica.
Efficient reinforcement learning in factored MDPs. Proceedings of the 16th International Joint Conference on Artificial Intelligence (IJCAI'99), Volume 2.
Exact matrix completion via convex optimization. Foundations of Computational Mathematics.
Image super-resolution via sparse representation. IEEE Transactions on Image Processing.
IEEE Transactions on Information Theory.
AGI relies on Markov decision processes, which assume deterministic states; such states, however, must be learned from observations. We propose that states are deterministic spatio-temporal chunks of observations, and we note that the learning of such episodic memory is attributed to the entorhinal-hippocampal complex (EHC) in the brain. The EHC receives information from the neocortex and encodes learned episodes into neocortical memory traces; it thereby changes its own input without changing the representations that have emerged within it. Motivated by recent results on exact matrix completion, we argue that step-wise decomposition of observations into 'typical' (deterministic) and 'atypical' (stochastic) constituents is the EHC's trick for learning episodic memory.
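The typical/atypical split alluded to above is closely related to robust principal component analysis (principal component pursuit), which sits alongside exact matrix completion in the same line of work: a data matrix is decomposed into a low-rank part (the regular, "typical" structure) plus a sparse part (the irregular, "atypical" deviations). The sketch below is an illustration of that decomposition idea, not the authors' algorithm; the inexact augmented-Lagrange scheme and the parameter choices are standard defaults from the robust-PCA literature, assumed here for concreteness.

```python
import numpy as np

def _svt(M, tau):
    """Singular value thresholding: soft-threshold the singular values of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def _shrink(M, tau):
    """Elementwise soft thresholding (promotes sparsity)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def robust_pca(X, lam=None, tol=1e-7, max_iter=500):
    """Split X into L (low-rank, 'typical') + S (sparse, 'atypical')
    via an inexact augmented-Lagrange scheme for principal component pursuit."""
    m, n = X.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))   # standard sparsity weight
    norm_X = np.linalg.norm(X)
    mu = 1.25 / np.linalg.norm(X, 2)     # initial penalty weight
    mu_bar = mu * 1e7                    # cap on the penalty
    rho = 1.5                            # penalty growth factor
    S = np.zeros_like(X)
    Y = np.zeros_like(X)                 # Lagrange multipliers
    for _ in range(max_iter):
        L = _svt(X - S + Y / mu, 1.0 / mu)       # low-rank update
        S = _shrink(X - L + Y / mu, lam / mu)    # sparse update
        R = X - L - S                            # residual
        Y += mu * R
        mu = min(mu * rho, mu_bar)
        if np.linalg.norm(R) < tol * norm_X:
            break
    return L, S
```

Given observations corrupted by occasional large deviations, `robust_pca` recovers the low-rank "typical" component with high accuracy, which is the sense in which a step-wise typical/atypical decomposition can denoise episodic input.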