Dimensionality reduction is the problem of finding a low-dimensional representation of high-dimensional input data. This paper examines the case where additional information is known about the data. In particular, we assume the data are given in a sequence with action labels associated with adjacent data points, such as might come from a mobile robot. The goal is a variation on dimensionality reduction, where the output should be a representation of the input data that is both low-dimensional and respects the actions (i.e., actions correspond to simple transformations in the output representation). We show how this variation can be solved with a semidefinite program. We evaluate the technique in a synthetic, robot-inspired domain, demonstrating qualitatively superior representations and quantitative improvements on a data prediction task.
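In MVU-style methods like the one described, the semidefinite program produces a Gram (inner-product) matrix, and the low-dimensional coordinates are then recovered from its top eigenvectors, as in kernel PCA. The sketch below illustrates only that final recovery step with numpy; the toy Gram matrix `K` is a stand-in assumption, not the output of the paper's actual SDP.

```python
import numpy as np

def embedding_from_gram(K, d=2):
    """Recover a d-dimensional embedding from a Gram matrix K."""
    n = K.shape[0]
    # Double-center K so the recovered embedding is mean-zero.
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H
    # Eigendecompose and keep the top-d components.
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:d]
    # Coordinates are eigenvectors scaled by sqrt of eigenvalues.
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0.0, None))

# Toy example: collinear points living in 3-D; a 1-D embedding
# should preserve their pairwise distances.
X = np.array([[0., 0., 0.], [1., 1., 1.], [2., 2., 2.], [3., 3., 3.]])
K = X @ X.T          # stand-in for an SDP-learned Gram matrix
Y = embedding_from_gram(K, d=1)
```

Here consecutive points are sqrt(3) apart in the input, and the 1-D embedding `Y` preserves those gaps exactly, since the data are intrinsically one-dimensional.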