Finding an appropriate set of features is an essential problem in the design of shape recognition systems. This paper attempts to show that for recognizing simple objects with high shape variability, such as handwritten characters, it is possible, and even advantageous, to feed the system directly with minimally processed images and to rely on learning to extract the right set of features. Convolutional Neural Networks are shown to be particularly well suited to this task. We also show that these networks can be used to recognize multiple objects without requiring explicit segmentation of the objects from their surroundings. The second part of the paper presents the Graph Transformer Network model, which extends the applicability of gradient-based learning to systems that use graphs to represent features, objects, and their combinations.
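As a rough illustration of the convolution-plus-subsampling feature extraction the abstract refers to, the following is a minimal NumPy sketch. The filter here is a hand-set stand-in, not a trained one, and the layer sizes are arbitrary; in the actual networks the filter weights are learned by gradient descent and many feature maps are stacked.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Valid cross-correlation of a 2-D image with a 2-D kernel."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def avg_pool2x2(fmap):
    """2x2 average pooling (subsampling), as used in early convolutional nets."""
    H, W = fmap.shape
    return fmap[:H // 2 * 2, :W // 2 * 2].reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

# Toy 8x8 "image" fed in with minimal preprocessing (just scaling).
image = np.arange(64, dtype=float).reshape(8, 8) / 64.0
kernel = np.ones((3, 3)) / 9.0               # stand-in for one learned filter
fmap = np.tanh(conv2d_valid(image, kernel))  # 6x6 feature map with squashing nonlinearity
pooled = avg_pool2x2(fmap)                   # 3x3 map after subsampling
```

Because the same small kernel is swept across the whole image, the extracted features are locally computed and shift-equivariant, which is what makes it feasible to learn them directly from raw pixels.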