In this paper we propose a discriminant learning framework for problems in which the data consist of linear subspaces rather than vectors. By treating subspaces as basic elements, learning algorithms adapt naturally to problems with linearly invariant structure. We propose a unifying view of subspace-based learning by formulating the problems on the Grassmann manifold, the set of fixed-dimensional linear subspaces of a Euclidean space. Previous methods for this problem typically adopt an inconsistent strategy: feature extraction is performed in the Euclidean space while non-Euclidean distances are used. In our approach, we treat each subspace as a point on the Grassmann manifold and perform feature extraction and classification in the same space. We show the feasibility of the approach using Grassmann kernel functions such as the Projection kernel and the Binet-Cauchy kernel. Experiments with real image databases show that the proposed method performs well compared with state-of-the-art algorithms.