Adaptive algorithms and stochastic approximations
Learning invariance from transformation sequences
Neural Computation
Kernel PCA and de-noising in feature spaces
Proceedings of the 1998 Conference on Advances in Neural Information Processing Systems 11
Nonparametric Time Series Prediction Through Adaptive Model Selection
Machine Learning
A Theory of Learning and Generalization
Slow feature analysis: unsupervised learning of invariances
Neural Computation
Bounds for Linear Multi-Task Learning
The Journal of Machine Learning Research
Generalization bounds for subspace selection and hyperbolic PCA
SLSFS'05: Proceedings of the 2005 International Conference on Subspace, Latent Structure and Feature Selection
On the eigenspectrum of the Gram matrix and the generalization error of kernel-PCA
IEEE Transactions on Information Theory
Unsupervised slow subspace-learning from stationary processes
Theoretical Computer Science
We propose a method of unsupervised learning from stationary, vector-valued processes. A low-dimensional subspace is selected on the basis of a criterion which rewards data variance (as in principal subspace analysis, PSA) and penalizes the variance of the velocity vector, thus exploiting the short-time dependencies of the process. We prove error bounds in terms of the β-mixing coefficients and consistency for absolutely regular processes. Experiments with image recognition demonstrate the algorithm's ability to learn geometrically invariant feature maps.
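The criterion described in the abstract can be made concrete with a short sketch. Assuming the subspace is chosen to maximize the projected data variance minus a weighted penalty on the variance of the discrete-time velocity (successive differences), the maximizer is spanned by the top eigenvectors of C − ν·Ċ, where C is the data covariance, Ċ the velocity covariance, and ν a trade-off weight. The function name slow_subspace, the parameter nu, and the toy data below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def slow_subspace(X, d, nu=1.0):
    """Sketch of the abstract's criterion: reward the variance of the
    projected data, penalize the variance of the projected velocity.
    X : (T, n) array, one row per time step; d : subspace dimension;
    nu : trade-off weight (hypothetical parameter, not from the paper)."""
    Xc = X - X.mean(axis=0)                # center the process
    C = Xc.T @ Xc / len(Xc)                # data covariance
    V = np.diff(Xc, axis=0)                # velocity vectors x_{t+1} - x_t
    Cdot = V.T @ V / len(V)                # velocity covariance
    # Over rank-d orthogonal projections P, tr(P C P) - nu * tr(P Cdot P)
    # is maximized by the top-d eigenvectors of the symmetric matrix C - nu * Cdot.
    w, U = np.linalg.eigh(C - nu * Cdot)
    return U[:, np.argsort(w)[::-1][:d]]   # orthonormal basis of the chosen subspace

# Toy usage: one slowly varying coordinate mixed with fast white noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)
X = np.column_stack([np.sin(t), rng.standard_normal(500)])
print(slow_subspace(X, d=1, nu=2.0))       # weight concentrates on the slow coordinate
```

Setting nu = 0 recovers plain principal subspace analysis; larger values of nu increasingly favor slowly varying directions, which is what produces the invariant feature maps mentioned in the abstract.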