We consider the problem of finding a suitable common low-dimensional subspace for accurately representing a given set of covariance matrices. With one covariance matrix, this is principal component analysis (PCA). For multiple covariance matrices, we term the problem Common Component Analysis (CCA). While CCA can be posed as a tensor decomposition problem, standard approaches to tensor decompositions have two critical issues: (i) they are iterative and sensitive to initialization; (ii) for a given level of approximation error, it is difficult to choose a suitable low dimensionality. In this paper, we present a detailed analysis of CCA that yields an effective initialization and iterative algorithms for the problem. The proposed methodology has provable approximation guarantees with respect to the global maximum and also allows one to choose the dimensionality for a given level of approximation error. We also establish conditions under which the methodology will achieve the global maximum. We illustrate the effectiveness of the proposed method through extensive experiments on synthetic data as well as on two real stock market datasets, where major financial events can be visualized in low dimensions.
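To make the problem concrete, the sketch below fits a common k-dimensional subspace to a set of symmetric matrices by maximizing the sum of squared projected norms, sum_i ||W^T A_i W||_F^2, over orthonormal W. This is an illustrative HOOI-style power iteration with a heuristic eigendecomposition initialization, not the paper's specific algorithm or its initialization; the function name and all parameter choices here are hypothetical.

```python
import numpy as np

def common_components(As, k, n_iter=100):
    """Illustrative sketch of Common Component Analysis: find an
    orthonormal d-by-k matrix W maximizing sum_i ||W^T A_i W||_F^2
    for symmetric matrices As. HOOI-style iteration; NOT the
    paper's exact method or initialization."""
    # Heuristic initialization (an assumption, not the paper's):
    # top-k eigenvectors of the sum of the input matrices.
    _, V = np.linalg.eigh(sum(As))       # eigenvalues ascending
    W = V[:, -k:]
    for _ in range(n_iter):
        # Ascent direction for the objective: sum_i A_i W (W^T A_i W).
        G = sum(A @ W @ (W.T @ A @ W) for A in As)
        # Retract onto the set of orthonormal matrices via QR.
        W, _ = np.linalg.qr(G)
    return W
```

For a set of covariance matrices whose dominant variance lies in a shared subspace, the returned W spans (approximately) that subspace, and projecting each A_i to W.T @ A_i @ W gives its k-dimensional common-subspace representation.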