The purpose of this paper is to outline a new formulation of statistical learning that will be more useful and relevant to the field of robotics. The primary motivation for this new perspective is the mismatch between the form of data assumed by current statistical learning algorithms and the form of data actually generated by robotic systems. Specifically, robotic systems generate a vast unlabeled data stream, while most current algorithms are designed to handle limited numbers of discrete, labeled, independent and identically distributed (i.i.d.) samples. We argue that there is only one meaningful unsupervised learning process that can be applied to a vast data stream: adaptive compression. The compression rate can then be used to compare different techniques, and the statistical models obtained through adaptive compression should also prove useful for other tasks.
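To make the idea concrete, here is a minimal sketch (not the paper's method) of how adaptive compression yields a comparable rate: an adaptive order-0 model with Laplace-smoothed symbol counts is scored by the ideal arithmetic-coding cost, -log2(p), of each symbol before the model updates on it. The symbol stream, alphabet size, and function name are illustrative assumptions; the compression rate in bits per symbol can then be compared against a non-adaptive baseline.

```python
import math

def adaptive_code_length(stream, alphabet_size):
    """Ideal code length (in bits) for encoding the stream with an
    adaptive order-0 model: each symbol costs -log2(p) under the
    current Laplace-smoothed counts, which are updated afterwards."""
    counts = {}
    total = 0
    bits = 0.0
    for symbol in stream:
        p = (counts.get(symbol, 0) + 1) / (total + alphabet_size)
        bits += -math.log2(p)  # cost before seeing the symbol
        counts[symbol] = counts.get(symbol, 0) + 1
        total += 1
    return bits

# A skewed, unlabeled symbol stream (hypothetical sensor readings).
stream = "aaabaaacaaab" * 50

adaptive_bits = adaptive_code_length(stream, alphabet_size=3)
uniform_bits = len(stream) * math.log2(3)  # non-adaptive baseline

# Compression rate (bits per symbol) is the comparison metric:
adaptive_rate = adaptive_bits / len(stream)
uniform_rate = uniform_bits / len(stream)
```

Because the stream is highly non-uniform, the adaptive model's rate falls well below the uniform baseline of log2(3) ≈ 1.585 bits per symbol; a richer model (e.g., higher-order context) would be judged by the same single number.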