This work proposes a hierarchical, biologically inspired architecture for learning sensor-based spatial representations of a robot environment in an unsupervised way. The first layer is a fixed, randomly generated recurrent neural network, the reservoir, which projects the input into a high-dimensional dynamic space. The second layer extracts instantaneous, slowly varying signals from the reservoir states using Slow Feature Analysis (SFA), and the third layer learns a sparse coding of the SFA outputs using Independent Component Analysis (ICA). While the SFA layer generates non-localized activations in space, the ICA layer exhibits high place selectivity, forming localized spatial activations characteristic of the place cells found in the hippocampal area of the rodent brain. We show that, using only a limited number of noisy short-range distance sensors as input, the proposed system learns a spatial representation of the environment that can predict the actual location of simulated and real robots without odometry. The results confirm that the reservoir layer is essential for learning spatial representations from low-dimensional input such as distance sensors, because the reservoir state reflects the recent history of the input stream. This fading memory is crucial for disambiguating locations that produce similar instantaneous sensor readings.
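The three-layer pipeline described above (fixed random reservoir → SFA → ICA) can be sketched in a few lines of NumPy. This is a minimal illustration under assumed sizes and parameters (8 input channels standing in for distance sensors, a 100-unit reservoir, spectral radius 0.9, a basic symmetric FastICA with a tanh nonlinearity), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Layer 1: fixed, randomly generated reservoir (assumed sizes) ---
n_in, n_res = 8, 100                     # e.g. 8 range sensors, 100 reservoir units
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale to spectral radius 0.9

def run_reservoir(U):
    """Drive the reservoir with input sequence U of shape (T, n_in)."""
    x = np.zeros(n_res)
    X = np.empty((len(U), n_res))
    for t, u in enumerate(U):
        x = np.tanh(W @ x + W_in @ u)    # state carries a fading input memory
        X[t] = x
    return X

# --- Layer 2: Slow Feature Analysis on the reservoir states ---
def sfa(X, n_out):
    Xc = X - X.mean(0)
    d, E = np.linalg.eigh(np.cov(Xc.T))  # whiten the states
    keep = d > 1e-9
    S = Xc @ E[:, keep] / np.sqrt(d[keep])
    dS = np.diff(S, axis=0)              # minimize temporal variation:
    d2, E2 = np.linalg.eigh(np.cov(dS.T))
    return S @ E2[:, :n_out]             # slowest features first

# --- Layer 3: sparse coding via symmetric FastICA (tanh contrast) ---
def ica(Y, n_iter=200):
    Yw = Y - Y.mean(0)                   # SFA output is already white
    n = Yw.shape[1]
    Wt = rng.normal(size=(n, n))
    for _ in range(n_iter):
        G = np.tanh(Yw @ Wt.T)
        Wt = G.T @ Yw / len(Yw) - np.diag((1 - G**2).mean(0)) @ Wt
        u, _, vt = np.linalg.svd(Wt)     # symmetric decorrelation
        Wt = u @ vt
    return Yw @ Wt.T

# Demo with a smooth random walk standing in for a sensor stream
U = np.cumsum(rng.normal(0, 0.1, (500, n_in)), axis=0)
X = run_reservoir(U)
slow = sfa(X[50:], n_out=16)             # discard a 50-step washout
sparse = ica(slow)
```

In a real setting, `U` would hold the robot's distance-sensor readings over time, and the localized `sparse` components would serve as place-cell-like position predictors.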