This work presents a novel method for training shift-invariant features within a Boosting framework. Shift invariance is achieved by features that perform local convolutions followed by subsampling. Other systems using this type of feature, e.g. Convolutional Neural Networks, rely on complex feed-forward networks with multiple layers. In contrast, the proposed system adds features one at a time using smoothing-spline base classifiers, and feature training minimizes the base classifier's cost. Boosting's sample reweighting ensures that the features are both descriptive and mutually independent. Because the system has fewer design parameters than comparable systems, adapting it to new problems is simple, and the stage-wise training makes it highly scalable. Experimental results demonstrate the competitiveness of our approach.
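The training loop described above, convolution-plus-subsampling features added one at a time under Boosting's sample reweighting, can be sketched roughly as follows. This is a minimal illustration under simplifying assumptions, not the paper's implementation: decision stumps stand in for the smoothing-spline base classifiers, candidate filters are drawn at random rather than optimized against the base-classifier cost, and a global max-pool serves as the subsampling step. All function names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_response(images, filt):
    """Convolve each image with `filt` (valid mode), then max-pool the
    response map to a scalar, giving a shift-tolerant feature value."""
    n, h, w = images.shape
    fh, fw = filt.shape
    responses = np.empty(n)
    for i, img in enumerate(images):
        out = np.array([[np.sum(img[y:y + fh, x:x + fw] * filt)
                         for x in range(w - fw + 1)]
                        for y in range(h - fh + 1)])
        responses[i] = out.max()  # global max-pool as the subsampling step
    return responses

def fit_stump(values, labels, weights):
    """Weighted decision stump on a scalar feature (a crude stand-in for
    the smoothing-spline base classifier).  Returns (error, threshold, polarity)."""
    best = (np.inf, 0.0, 1)
    for thresh in np.quantile(values, np.linspace(0.05, 0.95, 19)):
        for polarity in (1, -1):
            pred = polarity * np.sign(values - thresh)
            err = weights[pred != labels].sum()
            if err < best[0]:
                best = (err, thresh, polarity)
    return best

def boost_conv_features(images, labels, rounds=3, n_candidates=30, fsize=3):
    """Stage-wise training: each round picks the candidate filter whose
    stump minimizes the weighted error, then reweights the samples so the
    next feature focuses on what the ensemble still gets wrong."""
    n = len(images)
    weights = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        best = None
        for _ in range(n_candidates):
            filt = rng.standard_normal((fsize, fsize))
            err, thresh, pol = fit_stump(feature_response(images, filt),
                                         labels, weights)
            if best is None or err < best[0]:
                best = (err, filt, thresh, pol)
        err, filt, thresh, pol = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = pol * np.sign(feature_response(images, filt) - thresh)
        weights *= np.exp(-alpha * labels * pred)  # Boosting sample reweighting
        weights /= weights.sum()
        ensemble.append((alpha, filt, thresh, pol))
    return ensemble

def predict(ensemble, images):
    """Weighted vote of the stage-wise feature/stump pairs."""
    score = np.zeros(len(images))
    for alpha, filt, thresh, pol in ensemble:
        score += alpha * pol * np.sign(feature_response(images, filt) - thresh)
    return np.sign(score)

def make_toy(n=40):
    """Toy shift-invariance task: vertical bars (+1) vs. horizontal bars (-1)
    at random positions in an 8x8 image."""
    imgs, labels = np.zeros((n, 8, 8)), np.empty(n)
    for i in range(n):
        pos = rng.integers(1, 7)
        if i % 2 == 0:
            imgs[i, :, pos], labels[i] = 1.0, 1
        else:
            imgs[i, pos, :], labels[i] = 1.0, -1
    return imgs, labels
```

Because the max-pooled response is identical wherever the bar appears, a single boosted feature already separates the two toy classes regardless of position, which is the point of the convolution-plus-subsampling construction.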