Principal component analysis (PCA) suffers from the fact that each principal component (PC) is a linear combination of all the original variables, which makes the results difficult to interpret. Sparse PCA (sPCA) addresses this problem by producing modified PCs with sparse loadings. However, sPCA is limited to vector-represented data, so applying it for dimensionality reduction and feature selection on real-world data that are naturally represented as high-order tensors requires reshaping the tensors into vectors beforehand, which destroys the intrinsic data structure and induces the curse of dimensionality. To address this issue, this paper considers the problem of finding a set of critical features with multi-directional sparse loadings directly from tensorial data, and proposes a novel method, sparse high-order PCA (sHOPCA), that derives a set of sparse loadings in multiple directions. A computational complexity analysis is also presented to illustrate the efficiency of sHOPCA. To evaluate the proposed method, we perform several experiments on both synthetic and real-world datasets, and the results demonstrate the merit of sHOPCA for sparse representation of high-order tensorial data.
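To make the notion of sparse loadings concrete, the following is a minimal, hypothetical sketch (not the paper's sHOPCA algorithm) of a rank-one sparse PCA computed by alternating minimization with an L1 soft-thresholding step on the loading vector, in the spirit of sparse PCA via regularized low-rank matrix approximation. The data, the penalty value `lam`, and the function names are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, lam):
    # Elementwise soft-thresholding: the proximal operator of the L1 penalty.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_pc(X, lam=1.0, n_iter=100):
    """Leading sparse loading of X (n samples x p variables), found by
    alternating updates on a rank-1 approximation X ~ s * u v^T with an
    L1 penalty on the loading vector v (illustrative sketch)."""
    # Initialize with the ordinary leading right singular vector.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    v = vt[0]
    u = X @ v
    u /= np.linalg.norm(u)
    for _ in range(n_iter):
        # For fixed scores u, shrink the loading entries toward zero ...
        v = soft_threshold(X.T @ u, lam)
        norm = np.linalg.norm(v)
        if norm == 0.0:
            break
        v /= norm
        # ... then refresh the scores for the fixed sparse loading v.
        u = X @ v
        u /= np.linalg.norm(u)
    return v

rng = np.random.default_rng(0)
# Synthetic data whose signal is carried by the first 3 of 10 variables.
X = rng.normal(size=(100, 10))
X[:, :3] += 3.0 * rng.normal(size=(100, 1))
v = sparse_pc(X, lam=3.0)
print(np.nonzero(v)[0])  # the loading is supported on only a few variables
```

Because the loading vector has many entries exactly equal to zero, each component can be read as a combination of only a few original variables, which is the interpretability gain that motivates sPCA; the tensor setting of the paper extends this idea to sparse loadings along multiple modes without vectorizing the data first.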