Introduction to statistical pattern recognition (2nd ed.)
Discriminant Adaptive Nearest Neighbor Classification
IEEE Transactions on Pattern Analysis and Machine Intelligence
Probabilistic Visual Learning for Object Representation
IEEE Transactions on Pattern Analysis and Machine Intelligence
Prior knowledge in support vector kernels
NIPS '97: Proceedings of the 1997 Conference on Advances in Neural Information Processing Systems 10
Proceedings of the 1998 Conference on Advances in Neural Information Processing Systems 11
Statistical Image Object Recognition using Mixture Densities
Journal of Mathematical Imaging and Vision
Efficient Pattern Recognition Using a New Transformation Distance
Advances in Neural Information Processing Systems 5 (NIPS Conference)
Transformation Invariance in Pattern Recognition: Tangent Distance and Tangent Propagation
Neural Networks: Tricks of the Trade (an outgrowth of a 1996 NIPS workshop)
A Probabilistic View on Tangent Distance
Mustererkennung 2000, 22nd DAGM Symposium
Dimensionality Reduction through Subspace Mapping for Nearest Neighbor Algorithms
ECML '00: Proceedings of the 11th European Conference on Machine Learning
Pattern Classification (2nd Edition)
Combination of Tangent Vectors and Local Representations for Handwritten Digit Recognition
Proceedings of the Joint IAPR International Workshop on Structural, Syntactic, and Statistical Pattern Recognition
Local Tangent Distances for Classification Problems
WI-IAT '12: Proceedings of the 2012 IEEE/WIC/ACM International Joint Conferences on Web Intelligence and Intelligent Agent Technology - Volume 01
In many applications, modelling techniques are needed that take into account the inherent variability of the given data. In this paper, we present an approach to modelling class-specific pattern variation based on tangent distance within a statistical framework for classification. The model is an effective means of explicitly incorporating invariance with respect to transformations that do not change class membership, such as small affine transformations in the case of image objects. If no prior knowledge about the type of variability is available, it is desirable to learn the model parameters from the data. The probabilistic interpretation presented here allows us to view learning of the variational derivatives as a maximum likelihood estimation problem. We present experimental results from two real-world pattern recognition tasks: image object recognition and automatic speech recognition. On the US Postal Service handwritten digit recognition task, learning the variability achieves results comparable to those obtained using specific domain knowledge. On the SieTill corpus of continuously spoken, telephone-recorded German digit strings, the method shows a significant improvement over a conventional mixture density approach with a comparable number of parameters. The probabilistic model is well suited to statistical pattern recognition and can be extended to other domains such as cluster analysis.
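To make the central notion concrete, below is a minimal numpy sketch of the one-sided tangent distance that underlies the tangent-distance model discussed in the abstract: the distance from a pattern `y` to the affine subspace spanned by a pattern `x` and its tangent (variation) vectors. The function name, argument shapes, and the shift-tangent example are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def one_sided_tangent_distance(x, y, tangents):
    """One-sided tangent distance: the minimal Euclidean distance from y
    to the affine subspace {x + tangents @ a}, where the columns of
    `tangents` (shape (d, k)) are the tangent vectors of x, i.e. the
    derivatives of x with respect to k small transformations."""
    # Solve the least-squares problem min_a || tangents @ a - (y - x) ||
    a, *_ = np.linalg.lstsq(tangents, y - x, rcond=None)
    # Residual norm = distance after optimally transforming x toward y
    return np.linalg.norm(x + tangents @ a - y)

# Illustrative use: a pattern displaced along one of its tangent
# directions has (near-)zero tangent distance to the original, even
# though the plain Euclidean distance is nonzero.
x = np.array([1.0, 2.0, 3.0, 4.0])
t = np.array([[1.0], [0.0], [-1.0], [0.0]])  # hypothetical tangent vector
y = x + 0.5 * t[:, 0]
```

In the probabilistic view of the paper, such tangent vectors need not come from prior knowledge; they can be estimated from data by maximum likelihood, which is the learning problem the abstract refers to.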