Principal component analysis (PCA) has traditionally been applied to data expressed as 1-D vectors, yet much data, such as gray-level images, video sequences, and Gabor-filtered images, is intrinsically a second- or higher-order tensor. To represent image objects in their intrinsic form and order, rather than concatenating all the object data into a single vector, we propose in this paper a new optimal object reconstruction criterion under which the information in a high-dimensional tensor is represented as a much lower-dimensional tensor computed from projections onto multiple concurrent subspaces. In each of these subspaces, correlations along one of the tensor dimensions are reduced, enabling better object reconstruction. Concurrent subspaces analysis (CSA) is presented to learn these subspaces efficiently in an iterative manner. In contrast to techniques such as PCA, which vectorize tensor data, CSA's direct use of data in tensor form yields a more representative subspace and a larger number of available projection directions. These properties allow CSA to outperform traditional algorithms in the common small-sample-size setting, where CSA remains effective even with only a single sample per class. Extensive experiments on images of faces and digits, encoded as second- or third-order tensors, demonstrate that the proposed CSA outperforms PCA-based algorithms in both object reconstruction and object recognition.
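The iterative learning described above can be sketched as an alternating procedure for second-order tensors (images): fix one mode's basis, solve for the other as the top eigenvectors of that mode's scatter matrix, and repeat. The following NumPy sketch illustrates this scheme under stated assumptions; the function and parameter names (`csa_2d`, `k1`, `k2`, `n_iter`) are illustrative, not taken from the paper, and the details (initialization, stopping rule) are simplifications of the authors' method.

```python
import numpy as np

def csa_2d(X, k1, k2, n_iter=10, seed=0):
    """Alternating sketch of concurrent subspaces analysis for 2nd-order
    tensors. X: array of shape (n, m1, m2). Returns mode-wise bases
    U1 (m1, k1) and U2 (m2, k2) plus the sample mean.
    Note: this is an illustrative reconstruction, not the paper's code."""
    n, m1, m2 = X.shape
    mean = X.mean(axis=0)
    Xc = X - mean                                   # center the samples
    rng = np.random.default_rng(seed)
    # random orthonormal start for the mode-2 basis
    U2 = np.linalg.qr(rng.standard_normal((m2, k2)))[0]
    for _ in range(n_iter):
        # fix U2, update U1: top-k1 eigenvectors of the mode-1 scatter
        C1 = sum(Xi @ U2 @ (Xi @ U2).T for Xi in Xc)
        U1 = np.linalg.eigh(C1)[1][:, -k1:]
        # fix U1, update U2: top-k2 eigenvectors of the mode-2 scatter
        C2 = sum((U1.T @ Xi).T @ (U1.T @ Xi) for Xi in Xc)
        U2 = np.linalg.eigh(C2)[1][:, -k2:]
    return U1, U2, mean

def reconstruct(X, U1, U2, mean):
    """Project each sample into the concurrent subspaces and back."""
    Xc = X - mean
    core = np.einsum('ab,nbc,cd->nad', U1.T, Xc, U2)  # (n, k1, k2) cores
    return np.einsum('ab,nbc,cd->nad', U1, core, U2.T) + mean
```

Each sample is thus summarized by a small k1 × k2 core tensor rather than an m1·m2-dimensional vector, which is what keeps the scatter matrices (m1 × m1 and m2 × m2) tractable even when only a few samples are available.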