A common misconception within the neural network community is that, even with nonlinearities in their hidden layer, autoassociators trained with backpropagation are equivalent to linear methods such as principal component analysis (PCA). Our purpose is to demonstrate that nonlinear autoassociators in fact behave differently from linear methods and can outperform them when used for latent extraction, projection, and classification. Whereas linear autoassociators emulate PCA, and thus exhibit a flat or unimodal reconstruction error surface, autoassociators with nonlinearities in their hidden layer learn domains by building reconstruction error surfaces that, depending on the task, contain multiple local valleys. This interpolation bias allows nonlinear autoassociators to represent appropriate classifications of nonlinear multimodal domains, in contrast to linear autoassociators, which are inadequate for such tasks. Indeed, autoassociators with hidden-unit nonlinearities can be shown to perform nonlinear classification and nonlinear recognition.
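The contrast described above can be sketched with a toy experiment: reconstruct a multimodal (two-cluster) dataset once with a rank-k PCA projection and once with a small autoassociator whose hidden layer is sigmoidal, trained by backpropagation of the squared reconstruction error. This is a minimal numpy illustration under assumed settings (the data, layer sizes, and learning rate are illustrative, not the authors' setup), meant only to show the two mechanisms side by side rather than to prove one outperforms the other:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multimodal data: two well-separated Gaussian clusters in 4-D.
X = np.vstack([
    rng.normal(loc=-2.0, scale=0.3, size=(100, 4)),
    rng.normal(loc=+2.0, scale=0.3, size=(100, 4)),
])
X = X - X.mean(axis=0)  # center the data, as PCA assumes

k = 1  # number of components / hidden units (illustrative choice)

# --- Linear baseline: rank-k PCA reconstruction via SVD ---
U, S, Vt = np.linalg.svd(X, full_matrices=False)
X_pca = (X @ Vt[:k].T) @ Vt[:k]        # project onto top-k directions, reconstruct
pca_err = np.mean((X - X_pca) ** 2)

# --- Nonlinear autoassociator: 4 -> k (sigmoid) -> 4, trained by backprop ---
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

W1 = rng.normal(scale=0.5, size=(4, k)); b1 = np.zeros(k)
W2 = rng.normal(scale=0.5, size=(k, 4)); b2 = np.zeros(4)
lr = 0.05
losses = []
for _ in range(2000):
    H = sigmoid(X @ W1 + b1)           # nonlinear hidden code
    Y = H @ W2 + b2                    # linear output layer
    err = Y - X
    losses.append(np.mean(err ** 2))
    # Backpropagate the mean squared reconstruction error
    dY = 2.0 * err / X.size
    dW2 = H.T @ dY; db2 = dY.sum(axis=0)
    dH = dY @ W2.T
    dZ = dH * H * (1.0 - H)            # sigmoid derivative
    dW1 = X.T @ dZ; db1 = dZ.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(f"PCA (k={k}) reconstruction MSE:    {pca_err:.4f}")
print(f"Autoassociator reconstruction MSE: {losses[-1]:.4f}")
```

The PCA reconstruction error is fixed once the top-k subspace is chosen (a flat, unimodal criterion), whereas the autoassociator's sigmoid hidden unit can saturate differently for the two clusters, which is the kind of multi-valley reconstruction error surface the abstract attributes to nonlinear hidden layers.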