Efficient Pattern Recognition Using a New Transformation Distance. Advances in Neural Information Processing Systems 5 (NIPS Conference).
Modeling the manifolds of images of handwritten digits. IEEE Transactions on Neural Networks.
A re-weighting strategy for improving margins. Artificial Intelligence.
Neural learning by geometric integration of reduced 'rigid-body' equations. Journal of Computational and Applied Mathematics.
A Theory for Learning by Weight Flow on Stiefel-Grassman Manifold. Neural Computation.
Multi-prototype support vector machine. IJCAI'03, Proceedings of the 18th International Joint Conference on Artificial Intelligence.
A simple additive re-weighting strategy for improving margins. IJCAI'01, Proceedings of the 17th International Joint Conference on Artificial Intelligence, Volume 2.
Proceedings of the 2009 Conference on Computational Intelligence and Bioengineering: Essays in Memory of Antonina Starita.
To address the problem of invariant pattern recognition, Simard, LeCun, and Denker (1993) proposed a successful nearest-neighbor approach based on tangent distance, attaining state-of-the-art accuracy. Since this approach requires substantial computation and memory, Hastie, Simard, and Säckinger (1995) proposed an algorithm (HSS), based on singular value decomposition (SVD), for the generation of nondiscriminant tangent models. In this article we propose a different approach, based on a gradient-descent constructive algorithm, called TD-Neuron, that develops discriminant models. We also present comparative results of our constructive algorithm against the HSS and learning vector quantization (LVQ) algorithms. Specifically, we tested the HSS algorithm using both the original version, based on the two-sided tangent distance, and a new version based on the one-sided tangent distance. Empirical results on the NIST-3 database show that the TD-Neuron is superior to both the SVD- and LVQ-based algorithms, since it reaches a better trade-off between error and rejection.
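To make the one-sided variant concrete: the one-sided tangent distance from a pattern x to a prototype p with tangent vectors T is the residual of the least-squares projection of (x - p) onto the subspace spanned by T, i.e. min over a of ||x - (p + T a)||. The following is a minimal NumPy sketch of that computation; the function name and variable layout are illustrative, not taken from the paper.

```python
import numpy as np

def one_sided_tangent_distance(x, p, T):
    """One-sided tangent distance from pattern x to prototype p.

    T is a (d, k) matrix whose columns span the tangent subspace at p
    (e.g. directions of small rotations or translations of the prototype).
    The distance is the norm of the residual after projecting (x - p)
    onto that subspace, found by least squares.
    """
    a, *_ = np.linalg.lstsq(T, x - p, rcond=None)  # best tangent coefficients
    return np.linalg.norm(x - p - T @ a)           # residual off the tangent plane
```

Because the component of (x - p) lying in the tangent subspace is discounted, the tangent distance is never larger than the plain Euclidean distance; the two-sided version additionally minimizes over the tangent subspace of x.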