Bidirectional associative memories. IEEE Transactions on Systems, Man and Cybernetics.
Information processing in dynamical systems: foundations of harmony theory. In Parallel distributed processing: explorations in the microstructure of cognition, vol. 1.
Learning internal representations by error propagation. In Parallel distributed processing: explorations in the microstructure of cognition, vol. 1.
Slow feature analysis: unsupervised learning of invariances. Neural Computation.
Training products of experts by minimizing contrastive divergence. Neural Computation.
Modeling mental navigation in scenes with multiple objects. Neural Computation.
A fast learning algorithm for deep belief nets. Neural Computation.
Spatial transformations in the parietal cortex using basis functions. Journal of Cognitive Neuroscience.
Visibility transition planning for dynamic camera control. In Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation.
Autoregressive model of the hippocampal representation of events. IJCNN'09: Proceedings of the 2009 International Joint Conference on Neural Networks.
Numerous single-unit recording studies have found mammalian hippocampal neurons that fire selectively for the animal's location in space, independent of its orientation. The population of such neurons, commonly known as place cells, is thought to maintain an allocentric, or orientation-independent, internal representation of the animal's location in space and to mediate long-term storage of spatial memories. Because spatial information from the environment must reach the brain via sensory receptors in an inherently egocentric, or viewpoint-dependent, fashion, the question arises of how the brain learns to transform egocentric sensory representations into allocentric ones for long-term memory storage. Moreover, if these long-term memory representations of space are to guide motor behavior, the reverse transformation, from allocentric back to egocentric coordinates, must also be learned. We propose that orientation-invariant representations can be learned by neural circuits that follow two learning principles: minimization of reconstruction error and maximization of representational temporal inertia. Two neural network models that adhere to these principles are presented: the first optimizes them directly through gradient descent, and the second uses a more biologically realistic circuit based on the restricted Boltzmann machine (Hinton, 2002; Smolensky, 1986). Both models learn orientation-invariant representations, and the latter exhibits place-cell-like responses when trained on a linear track environment.
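The two learning principles named in the abstract can be combined into a single objective: minimize the reconstruction error of the input while penalizing fast temporal change of the hidden code (maximizing its temporal inertia, as in slow feature analysis). The sketch below runs gradient descent on that combined objective for a tiny linear autoencoder with tied weights; the layer sizes, the slowness weight `lam`, the synthetic random-walk data, and the linear encoder are all illustrative assumptions, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "sensory" sequence: a smooth random walk (illustrative data).
T, n, k = 200, 8, 3                   # time steps, input dim, code dim (assumed)
X = np.cumsum(rng.normal(scale=0.1, size=(T, n)), axis=0)
X -= X.mean(axis=0)

lam = 1.0                             # weight of the temporal-inertia penalty

def loss(W):
    H = X @ W.T                       # hidden code h_t = W x_t (linear encoder)
    R = X - H @ W                     # reconstruction residual x_t - W^T h_t
    D = np.diff(H, axis=0)            # temporal change of the code
    return ((R ** 2).sum() + lam * (D ** 2).sum()) / T

def grad(W):
    H = X @ W.T
    R = X - H @ W
    Dx = np.diff(X, axis=0)           # input differences drive the slowness term
    g_rec = -2.0 * W @ (X.T @ R + R.T @ X)   # d/dW of sum ||x - W^T W x||^2
    g_slow = 2.0 * lam * W @ (Dx.T @ Dx)     # d/dW of lam * sum ||W dx||^2
    return (g_rec + g_slow) / T

W = 0.1 * rng.normal(size=(k, n))
loss_before = loss(W)
for _ in range(300):                  # plain gradient descent
    W -= 0.01 * grad(W)
loss_after = loss(W)
```

Both terms pull on the same weights: the reconstruction term keeps the code informative about the input, while the slowness term favors code directions that change little between consecutive time steps, which is what makes orientation-invariant (slowly varying) features emerge.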
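The second, more biologically realistic model is built on the restricted Boltzmann machine, which is typically trained with contrastive divergence (Hinton, 2002). A minimal CD-1 sketch on toy binary patterns follows; the layer sizes, learning rate, epoch count, and the two repeating patterns are illustrative assumptions, not the network described in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_vis, n_hid = 6, 4
# Toy binary "sensory" data: two repeating patterns (illustrative).
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 20, dtype=float)

W = 0.01 * rng.normal(size=(n_vis, n_hid))
b_v = np.zeros(n_vis)                 # visible biases
b_h = np.zeros(n_hid)                 # hidden biases
lr = 0.1

def recon_error(V):
    H = sigmoid(V @ W + b_h)
    V1 = sigmoid(H @ W.T + b_v)
    return ((V - V1) ** 2).mean()

err_before = recon_error(data)
for epoch in range(200):
    # Positive phase: hidden probabilities and samples given the data.
    ph = sigmoid(data @ W + b_h)
    h = (rng.random(ph.shape) < ph).astype(float)
    # Negative phase (one Gibbs step): reconstruct visibles, re-infer hiddens.
    pv = sigmoid(h @ W.T + b_v)
    ph2 = sigmoid(pv @ W + b_h)
    # CD-1 update: data correlations minus reconstruction correlations.
    W += lr * (data.T @ ph - pv.T @ ph2) / len(data)
    b_v += lr * (data - pv).mean(axis=0)
    b_h += lr * (ph - ph2).mean(axis=0)
err_after = recon_error(data)
```

The update nudges the weights so that the model's one-step reconstructions resemble the data more closely, which is an approximate gradient of the data likelihood; this local, Hebbian-like rule is what makes the RBM circuit more biologically plausible than backpropagated gradient descent.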