Training convolutional neural networks (CNNs) on large sets of high-resolution images is too computationally intensive to be performed on commodity CPUs. Such architectures nevertheless achieve state-of-the-art results on low-resolution machine vision tasks such as handwritten character recognition. We have adapted the inherent multi-level parallelism of CNNs to Nvidia's CUDA GPU architecture, accelerating training by two orders of magnitude. This dramatic speedup makes it possible to apply CNN architectures to pattern recognition tasks on datasets of high-resolution natural images.
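The "inherent multi-level parallelism" of a CNN can be seen in the forward pass of a convolutional layer: every output feature map is independent of the others, and within a map every output pixel is independent. A minimal sketch of this decomposition is shown below (illustrative only; the function name, shapes, and loop structure are assumptions, not the paper's actual GPU kernel). On CUDA, the outer level would typically map to thread blocks and the inner level to threads within a block.

```python
import numpy as np

def conv_layer_forward(inputs, weights, bias):
    """Forward pass of one convolutional layer ('valid' convolution).

    inputs:  (in_maps, H, W)
    weights: (out_maps, in_maps, k, k)
    bias:    (out_maps,)
    """
    out_maps, in_maps, k, _ = weights.shape
    _, H, W = inputs.shape
    oh, ow = H - k + 1, W - k + 1
    out = np.zeros((out_maps, oh, ow))
    # Level 1: output maps are independent (-> CUDA thread blocks).
    for o in range(out_maps):
        # Level 2: output pixels within a map are independent (-> threads).
        for y in range(oh):
            for x in range(ow):
                patch = inputs[:, y:y+k, x:x+k]
                out[o, y, x] = np.sum(patch * weights[o]) + bias[o]
    return out

x = np.random.rand(2, 8, 8)   # 2 input maps, 8x8 pixels
w = np.random.rand(4, 2, 3, 3)  # 4 output maps, 3x3 kernels
b = np.zeros(4)
y = conv_layer_forward(x, w, b)
print(y.shape)  # (4, 6, 6)
```

Because no iteration of the two outer loops reads another iteration's output, all `out_maps * oh * ow` dot products can execute concurrently, which is what a GPU implementation exploits.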