Locally weighted regression is a computationally efficient technique for non-linear regression. For high-dimensional data, however, it becomes numerically brittle and computationally too expensive if many local models must be maintained simultaneously. Combining locally weighted regression with local linear dimensionality reduction is therefore a promising remedy. In this context, we review linear dimensionality-reduction methods, compare their performance on non-parametric locally linear regression, and discuss their suitability for incremental learning. The methods considered fall into three groups: (1) reducing dimensionality only on the input data, (2) modeling the joint input-output data distribution, and (3) optimizing the correlation between projection directions and output data. Group 1 contains principal component regression (PCR); group 2 contains principal component analysis (PCA) in joint input and output space, factor analysis, and probabilistic PCA; and group 3 contains reduced-rank regression (RRR) and partial least squares (PLS) regression. Among the tested methods, only those in group 3 achieved robust performance even with a non-optimal number of components (factors or projection directions). In contrast, groups 1 and 2 failed when given fewer components than the data's true intrinsic dimensionality, because they rely on estimating that dimensionality correctly. Within group 3, PLS is the only method for which a computationally efficient incremental implementation exists. PLS therefore appears ideally suited as a building block for a locally weighted regressor in which projection directions are added incrementally on the fly.
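Since the abstract singles out PLS as the one method with an efficient incremental implementation, a minimal batch sketch of PLS1 (single-output partial least squares) may help make it concrete. The sketch below is written in Python with NumPy; the function names pls1_fit and pls1_predict and their interfaces are illustrative assumptions, not code from the paper, and the paper's locally weighted, incremental variant would add projection directions online rather than refit in batch.

import numpy as np

def pls1_fit(X, y, n_components):
    """Batch PLS1: find directions w_k of maximal covariance between X and y,
    regress y on the scores t_k = X w_k, and deflate before the next direction."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xr, yr = X - x_mean, y - y_mean          # centered working copies
    d = X.shape[1]
    W = np.zeros((d, n_components))          # projection directions
    P = np.zeros((d, n_components))          # loadings used for deflation
    b = np.zeros(n_components)               # per-component regression coefficients
    for k in range(n_components):
        w = Xr.T @ yr                        # covariance direction with output residual
        w /= np.linalg.norm(w)
        t = Xr @ w                           # scores along this direction
        tt = t @ t
        p = Xr.T @ t / tt
        b[k] = (t @ yr) / tt                 # univariate regression of residual on score
        Xr -= np.outer(t, p)                 # deflate input ...
        yr -= b[k] * t                       # ... and output residuals
        W[:, k], P[:, k] = w, p
    return W, P, b, x_mean, y_mean

def pls1_predict(x, W, P, b, x_mean, y_mean):
    """Predict for one input x by recomputing scores with the same deflation."""
    xr = x - x_mean
    yhat = y_mean
    for k in range(len(b)):
        t = xr @ W[:, k]
        yhat += b[k] * t
        xr = xr - t * P[:, k]
    return yhat

# Illustrative usage on synthetic data (all names above are assumptions):
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=200)
W, P, b, xm, ym = pls1_fit(X, y, n_components=3)
print(pls1_predict(X[0], W, P, b, xm, ym), y[0])

Note that each direction w_k depends only on covariances between the input and the current output residual; it is this structure that makes recursive, per-sample updates of PLS feasible and underlies its suitability for the incremental, locally weighted setting the abstract describes.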