We present a novel convolutional auto-encoder (CAE) for unsupervised feature learning. A stack of CAEs forms a convolutional neural network (CNN). Each CAE is trained with conventional on-line gradient descent and no additional regularization terms. A max-pooling layer proves essential for learning biologically plausible features consistent with those found by previous approaches. Initializing a CNN with the filters of a trained CAE stack yields superior performance on digit (MNIST) and object recognition (CIFAR-10) benchmarks.
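The architecture described above can be sketched in a few lines of numpy. This is a minimal, illustrative forward pass of a single CAE layer — encode by convolution, compress with 2x2 max-pooling, then reconstruct by upsampling and a second convolution. The 5x5 filter size, ReLU nonlinearity, and single feature map are assumptions for brevity, not the paper's exact hyperparameters, and training on the reconstruction error is omitted.

```python
import numpy as np

def conv2d_valid(x, w):
    """Naive 'valid' 2-D cross-correlation of a single-channel image x with filter w."""
    H, W = x.shape
    kh, kw = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def max_pool_2x2(x):
    """Non-overlapping 2x2 max-pooling; truncates odd trailing rows/columns."""
    H, W = x.shape
    x = x[:H - H % 2, :W - W % 2]
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

def upsample_2x2(x):
    """Nearest-neighbor upsampling, the inverse spatial step of the pooling above."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def cae_forward(x, w_enc, w_dec):
    """One CAE layer: encode -> pool -> upsample -> decode to a reconstruction."""
    h = np.maximum(conv2d_valid(x, w_enc), 0.0)   # feature map (ReLU is an assumption)
    p = max_pool_2x2(h)                           # pooled code passed to the next CAE
    u = upsample_2x2(p)
    kh, kw = w_dec.shape
    u_pad = np.pad(u, ((kh - 1, kh - 1), (kw - 1, kw - 1)))  # 'full' conv via padding
    x_rec = conv2d_valid(u_pad, w_dec)            # reconstruction, same size as x
    return h, p, x_rec

rng = np.random.default_rng(0)
x = rng.random((28, 28))            # one MNIST-sized input
w = rng.standard_normal((5, 5)) * 0.1
h, p, x_rec = cae_forward(x, w, w)  # feature map 24x24, code 12x12, reconstruction 28x28
```

Stacking follows the same pattern: the pooled code `p` of one trained CAE becomes the input of the next, and the learned encoder filters initialize the corresponding CNN layer.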