From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose
IEEE Transactions on Pattern Analysis and Machine Intelligence
Non-Linear Dimensionality Reduction
Advances in Neural Information Processing Systems 5 (NIPS)
Dimensionality Reduction for Supervised Learning with Reproducing Kernel Hilbert Spaces
The Journal of Machine Learning Research
Stochastic Optimization (Scientific Computation)
Regression on manifolds using kernel dimension reduction
Proceedings of the 24th International Conference on Machine Learning (ICML)
Deep bottleneck classifiers in supervised dimension reduction
Proceedings of the 20th International Conference on Artificial Neural Networks (ICANN'10), Part III
Dimension reduction for regression (DRR) is the problem of finding low-dimensional representations of high-dimensional data that preserve the ability to predict a target variable. We propose performing DRR with a neural network that contains a low-dimensional "bottleneck" layer. While the network is trained for regression, the bottleneck layer learns a low-dimensional representation of the data. We compare our method to Covariance Operator Inverse Regression (COIR), which has been reported to perform well compared to many other DRR methods. The bottleneck network compares favorably with COIR: it is applicable to larger data sets, it is less sensitive to tuning parameters, and it gives better results on several real data sets.
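The idea of the bottleneck network can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the architecture (a linear encoder to a 1-D bottleneck followed by a tanh hidden layer), the synthetic data, and all dimensions are illustrative assumptions. The target depends on the input only through a single direction, so training the network for regression should force the bottleneck activations to recover that direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic problem (not from the paper): the target depends
# on the 10-D input only through one direction w_true, so a 1-D bottleneck
# can in principle capture all predictive information.
n, d_in = 400, 10
w_true = rng.normal(size=(d_in, 1))
w_true /= np.linalg.norm(w_true)
X = rng.normal(size=(n, d_in))
y = np.tanh(2.0 * X @ w_true)

# Bottleneck network: linear encoder -> 1-D bottleneck -> tanh hidden -> output.
h = 16
W1 = rng.normal(size=(d_in, 1)) * 0.3; b1 = np.zeros(1)  # encoder -> bottleneck
W2 = rng.normal(size=(1, h)) * 0.3;    b2 = np.zeros(h)  # bottleneck -> hidden
W3 = rng.normal(size=(h, 1)) * 0.3;    b3 = np.zeros(1)  # hidden -> output

def forward(X):
    Z = X @ W1 + b1          # bottleneck activations = learned 1-D representation
    H = np.tanh(Z @ W2 + b2)
    return Z, H, H @ W3 + b3

_, _, P0 = forward(X)
loss0 = np.mean((P0 - y) ** 2)   # error before training

lr = 0.1
for _ in range(3000):            # full-batch gradient descent on squared error
    Z, H, P = forward(X)
    gP = 2.0 * (P - y) / n                    # d(loss)/d(output)
    gW3 = H.T @ gP;  gb3 = gP.sum(0)
    gA  = (gP @ W3.T) * (1.0 - H ** 2)        # backprop through tanh
    gW2 = Z.T @ gA;  gb2 = gA.sum(0)
    gZ  = gA @ W2.T
    gW1 = X.T @ gZ;  gb1 = gZ.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
    W3 -= lr * gW3; b3 -= lr * gb3

Z, _, P = forward(X)
loss = np.mean((P - y) ** 2)
# If the bottleneck has learned the predictive subspace, its activation
# should correlate strongly with the true 1-D projection.
corr = abs(np.corrcoef(Z[:, 0], (X @ w_true)[:, 0])[0, 1])
print(f"MSE {loss0:.3f} -> {loss:.3f}, |corr(bottleneck, true proj)| = {corr:.2f}")
```

Because the network is trained purely for the regression objective, the low-dimensional representation emerges as a side effect, which is what distinguishes this approach from methods such as COIR that estimate the subspace directly.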