We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. Our feature extractors are neither carefully designed nor pre-wired, but learned in a supervised way. Our deep hierarchical architectures achieve the best published results on benchmarks for object classification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with error rates of 2.53%, 19.51%, and 0.35%, respectively. Deep nets trained by plain back-propagation outperform shallower ones. Learning is surprisingly rapid: NORB is completely trained within five epochs, and test error rates on MNIST drop to 2.42%, 0.97%, and 0.48% after 1, 3, and 17 epochs, respectively.
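The abstract describes deep hierarchies of convolutional and pooling layers whose filters are learned rather than hand-designed. As a rough illustration of the basic building block (not the paper's GPU implementation), the sketch below runs the forward pass of one convolutional layer in plain NumPy: each filter is cross-correlated with a MNIST-sized input, squashed through tanh, and max-pooled. The filter count (4) and sizes (5x5 kernels, 2x2 pooling) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid'-mode 2D cross-correlation of a single-channel image."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kH, j:j+kW] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling over size x size windows."""
    H, W = fmap.shape
    H2, W2 = H // size, W // size
    return fmap[:H2*size, :W2*size].reshape(H2, size, W2, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.standard_normal((28, 28))           # MNIST-sized input (assumed)
kernels = 0.1 * rng.standard_normal((4, 5, 5))  # 4 random 5x5 filters (illustrative)

# Forward pass of one conv layer: convolve, squash with tanh, then pool.
maps = [np.tanh(conv2d_valid(image, k)) for k in kernels]  # each 24 x 24
pooled = [max_pool(m, 2) for m in maps]                    # each 12 x 12
print(pooled[0].shape)  # (12, 12)
```

In the full architecture, several such layers are stacked and the learned filter weights are updated by back-propagation; this sketch only shows the forward shapes (28 → 24 after a 5x5 valid convolution, 24 → 12 after 2x2 pooling).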