Introduction to Reinforcement Learning
The Journal of Machine Learning Research
An analysis of Laplacian methods for value function approximation in MDPs
IJCAI'07 Proceedings of the 20th International Joint Conference on Artificial Intelligence
Learning Representation and Control in Markov Decision Processes: New Frontiers
Foundations and Trends® in Machine Learning
Fast Spectral Clustering with Random Projection and Sampling
MLDM '09 Proceedings of the 6th International Conference on Machine Learning and Data Mining in Pattern Recognition
Fast density-weighted low-rank approximation spectral clustering
Data Mining and Knowledge Discovery
The core computational step in spectral learning, finding the projection of a function onto the eigenspace of a symmetric operator such as a graph Laplacian, generally incurs cubic computational complexity, O(N^3). This paper describes the use of Lanczos eigenspace projections to accelerate spectral projections, reducing the complexity to O(n T_op + n^2 N) operations, where n is the number of distinct eigenvalues and T_op is the cost of multiplying the operator T by a vector. The approach is based on diagonalizing the restriction of the operator to the Krylov space spanned by powers of the operator applied to the projected function. Further savings can be obtained by constructing an approximate Lanczos tridiagonal representation of the Krylov-space-restricted operator. A key novelty of this paper is the use of Krylov-subspace-modulated Lanczos acceleration for multi-resolution wavelet analysis. The challenging problem of learning to control a robot arm is used to test the proposed approach.
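The Lanczos projection idea described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: it builds an unnormalized path-graph Laplacian as the symmetric operator (an assumed stand-in for the paper's graph Laplacians), runs Lanczos iterations seeded with the function to be projected, diagonalizes the small tridiagonal restriction T = Q^T L Q, and projects the function onto the leading Ritz vectors. The function and matrix names (`lanczos`, `L`, `f`) are illustrative choices.

```python
import numpy as np

def lanczos(A, v, m):
    """Run m steps of Lanczos on symmetric A starting from v.
    Returns the orthonormal Krylov basis Q (N x m) and the
    diagonal/off-diagonal entries of the tridiagonal restriction."""
    n = len(v)
    Q = np.zeros((n, m))
    alphas, betas = np.zeros(m), np.zeros(m - 1)
    q, q_prev, beta = v / np.linalg.norm(v), np.zeros(n), 0.0
    Q[:, 0] = q
    for j in range(m):
        w = A @ q - beta * q_prev
        alphas[j] = q @ w
        w -= alphas[j] * q
        # Full reorthogonalization: a pragmatic choice for numerical
        # stability at small m, not part of the textbook recurrence.
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        if j < m - 1:
            betas[j] = np.linalg.norm(w)
            q_prev, q = q, w / betas[j]
            Q[:, j + 1] = q
            beta = betas[j]
    return Q, alphas, betas

# Unnormalized Laplacian of a path graph on N vertices.
N = 40
L = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
L[0, 0] = L[-1, -1] = 1.0

# Function to project, used as the Lanczos starting vector.
f = np.random.default_rng(0).standard_normal(N)

m = 15
Q, a, b = lanczos(L, f, m)
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)   # m x m tridiagonal
evals, S = np.linalg.eigh(T)                      # cheap: only m x m
ritz_vecs = Q @ S                                 # approximate eigenvectors of L

# Projection of f onto the k smoothest Ritz directions.
k = 5
f_proj = ritz_vecs[:, :k] @ (ritz_vecs[:, :k].T @ f)
```

Because the Krylov space is seeded with f itself, the eigenvalue problem solved is only m-by-m rather than N-by-N, which is the source of the complexity reduction the abstract describes.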