Unsupervised slow subspace-learning from stationary processes
ALT'06: Proceedings of the 17th International Conference on Algorithmic Learning Theory
We propose a method of unsupervised learning from stationary, vector-valued processes. A projection to a low-dimensional subspace is selected on the basis of an objective function that rewards data variance and penalizes the variance of the velocity vector, thus exploiting the short-time dependencies of the process. We prove bounds on the estimation error of the objective in terms of the β-mixing coefficients of the process. It is also shown that maximizing the objective minimizes an error bound for simple classification algorithms on a generic class of learning tasks. Experiments with image recognition demonstrate the algorithm's ability to learn geometrically invariant feature maps.
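The paper's derivation is not reproduced on this page, but the abstract pins down the shape of the estimator: reward the variance of the projected data, penalize the variance of the projected velocity (the successive differences of the sample path), and optimize over low-dimensional orthogonal projections. Assuming such a trade-off reduces to a symmetric eigenproblem on the difference of the two empirical covariances, the NumPy sketch below illustrates the idea; the function name slow_subspace, the trade-off weight lam, and the toy data are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def slow_subspace(X, d, lam=1.0):
        """Estimate a d-dimensional 'slow' subspace from a sample path X
        of shape (T, n): T consecutive observations of an n-dimensional
        stationary process.

        Sketch of an objective of the abstract's form,
            tr(W' C W) - lam * tr(W' Cdot W),
        where C is the empirical covariance of the observations (rewarded)
        and Cdot the empirical covariance of the velocity (penalized).
        Its maximizer over orthonormal W is spanned by the top-d
        eigenvectors of C - lam * Cdot.
        """
        Xc = X - X.mean(axis=0)          # center the data
        C = Xc.T @ Xc / len(Xc)          # data covariance (rewarded)
        V = np.diff(X, axis=0)           # velocity vectors x_{t+1} - x_t
        Cdot = V.T @ V / len(V)          # velocity covariance (penalized)
        # Symmetric eigenproblem; eigh returns eigenvalues in ascending order.
        w, U = np.linalg.eigh(C - lam * Cdot)
        return U[:, -d:]                 # columns: orthonormal basis, shape (n, d)

    # Toy usage: two slow sinusoidal components hidden among three
    # fast noise coordinates of a 5-dimensional process.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 2000)
    slow = np.stack([np.sin(t), np.cos(t)], axis=1)
    X = np.hstack([slow, 0.5 * rng.standard_normal((2000, 3))])
    W = slow_subspace(X, d=2)
    print(W.shape)  # (5, 2); W should align with the slow coordinates

Projecting new observations through W.T then yields features for a simple classifier such as nearest neighbor, which is the kind of setting the abstract's error-bound claim refers to.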