Dimension reduction for regression with bottleneck neural networks
Proceedings of the 11th International Conference on Intelligent Data Engineering and Automated Learning (IDEAL'10)
Deep autoencoder networks have been applied successfully to unsupervised dimension reduction. An autoencoder has a "bottleneck" middle layer of only a few hidden units; when the full network is trained to minimize reconstruction error, this layer yields a low-dimensional representation of the data. We propose using a deep bottlenecked neural network for supervised dimension reduction: instead of reproducing its input, the network is trained to perform classification. Pretraining with restricted Boltzmann machines is combined with supervised fine-tuning. Supervised fine-tuning has been used before, but with cost functions whose computation scales quadratically in the number of data points. Training a bottleneck classifier scales only linearly, yet gives results comparable to, and sometimes better than, two earlier supervised methods.
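The idea of a supervised bottleneck classifier can be sketched in a few lines: a small multilayer network with a narrow middle layer is trained on a classification cost (softmax cross-entropy, whose computation scales linearly in the number of samples), and the bottleneck activations are then read out as the low-dimensional representation. This is a minimal illustration, not the authors' implementation: the toy data, layer sizes, and learning rate are assumptions, and the RBM pretraining stage described in the abstract is omitted for brevity.

```python
# Minimal sketch of a supervised bottleneck classifier (hypothetical setup,
# not the paper's code). RBM pretraining is omitted; training starts from
# small random weights.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian classes in 10-D.
n, d = 200, 10
X = np.vstack([rng.normal(-1.0, 1.0, (n // 2, d)),
               rng.normal(+1.0, 1.0, (n // 2, d))])
y = np.array([0] * (n // 2) + [1] * (n // 2))
Y = np.eye(2)[y]                       # one-hot targets

# Network: 10 -> 8 -> 2 (bottleneck) -> 2 classes
sizes = [d, 8, 2, 2]
W = [rng.normal(0.0, 0.1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(s) for s in sizes[1:]]

def forward(X):
    """Return the activations of every layer: tanh hidden units, softmax output."""
    acts = [X]
    for i, (Wi, bi) in enumerate(zip(W, b)):
        z = acts[-1] @ Wi + bi
        if i < len(W) - 1:
            acts.append(np.tanh(z))
        else:                           # output layer: softmax
            e = np.exp(z - z.max(axis=1, keepdims=True))
            acts.append(e / e.sum(axis=1, keepdims=True))
    return acts

# Batch gradient descent on the cross-entropy cost; one pass over the data
# costs O(n), i.e. linear in the number of samples.
lr = 0.5
for step in range(300):
    acts = forward(X)
    delta = (acts[-1] - Y) / n          # gradient of softmax + cross-entropy
    for i in reversed(range(len(W))):
        gW = acts[i].T @ delta
        gb = delta.sum(axis=0)
        if i > 0:                       # backpropagate through tanh
            delta = (delta @ W[i].T) * (1.0 - acts[i] ** 2)
        W[i] -= lr * gW
        b[i] -= lr * gb

acts = forward(X)
codes = acts[2]                         # 2-D bottleneck representation
accuracy = float((acts[-1].argmax(axis=1) == y).mean())
print(codes.shape, accuracy)
```

After training, `codes` is the supervised low-dimensional embedding of the data; in the paper's pipeline these bottleneck activations play the role that the reconstruction-trained bottleneck plays in an autoencoder.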