Deformation Models for Image Recognition
IEEE Transactions on Pattern Analysis and Machine Intelligence
We evaluate different two-dimensional non-linear deformation models for handwritten character recognition. Starting from a true two-dimensional model, we derive pseudo-two-dimensional and zero-order deformation models. Experiments show that including suitable representations of the local image context of each pixel is the most important factor in increasing performance. With these methods, we achieve very competitive results across five different tasks, in particular a 0.5% error rate on the MNIST task.
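The zero-order deformation model mentioned in the abstract (often referred to as the image distortion model) lets each test pixel shift independently within a small warp range, with matching done on a local context window around each pixel rather than on single pixel values. The following NumPy sketch illustrates the idea under simplifying assumptions: grayscale images of equal size, squared-error matching, and illustrative parameter names (`warp`, `context`) that are not taken from the paper's implementation.

```python
import numpy as np

def idm_distance(test, ref, warp=2, context=1):
    """Zero-order deformation (image distortion model) distance, sketched.

    For each pixel of `test`, search a (2*warp+1) x (2*warp+1)
    neighborhood in `ref` for the best-matching local context window
    and accumulate the minimal squared differences.
    """
    h, w = test.shape
    pad = context
    # Pad both images so every pixel has a full context window.
    tp = np.pad(test, pad, mode='edge')
    rp = np.pad(ref, pad, mode='edge')
    total = 0.0
    for i in range(h):
        for j in range(w):
            # Context window around the test pixel (padded coordinates).
            tw = tp[i:i + 2 * pad + 1, j:j + 2 * pad + 1]
            best = np.inf
            # Zero-order model: each pixel may move independently
            # within the warp range, with no smoothness constraint.
            for di in range(-warp, warp + 1):
                for dj in range(-warp, warp + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        rw = rp[ii:ii + 2 * pad + 1, jj:jj + 2 * pad + 1]
                        d = float(np.sum((tw - rw) ** 2))
                        best = min(best, d)
            total += best
    return total
```

This distance is not symmetric (each test pixel chooses its own reference match), and in a nearest-neighbor classifier it would replace the plain Euclidean image distance; richer local context, such as gradient windows, is what the paper identifies as the key ingredient.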