Tangent Distance (TD) is a classical method for invariant pattern classification. However, conventional TD requires tangent vectors to be obtained in advance, which is difficult for anything but image data. This paper extends TD to more general pattern classification tasks. The basic assumption is that tangent vectors can be approximately represented by pattern variations. We propose three probabilistic subspace models to encode these variations: the linear subspace, nonlinear subspace, and manifold subspace models. The three models are treated in a unified view, namely Probabilistic Tangent Subspace (PTS). Experiments show that PTS achieves promising classification performance on non-image data sets.
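As an illustrative sketch only (not the paper's exact formulation), the one-sided tangent distance measures how far a pattern lies from the affine subspace spanned by a prototype's tangent vectors; under the linear-subspace assumption described above, the tangent basis can be approximated by the top principal directions of within-class variation. The function names and the PCA-based estimation below are assumptions for illustration:

```python
import numpy as np

def tangent_basis_from_variations(X_class, k):
    """Approximate tangent vectors as the top-k principal directions of
    within-class variation (an assumed stand-in for analytic tangents)."""
    V = X_class - X_class.mean(axis=0)          # variations about class mean
    U, _, _ = np.linalg.svd(V.T, full_matrices=False)
    return U[:, :k]                              # d x k tangent basis

def tangent_distance(x, p, T):
    """One-sided tangent distance: distance from pattern x to the affine
    subspace p + span(T), where columns of T are tangent vectors."""
    a, *_ = np.linalg.lstsq(T, x - p, rcond=None)  # best subspace coefficients
    residual = x - p - T @ a
    return np.linalg.norm(residual)
```

Displacements along the tangent directions (modelled transformations of the prototype) then cost nothing, while displacement orthogonal to them is penalized as in ordinary Euclidean distance.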