Normalized Cuts and Image Segmentation
IEEE Transactions on Pattern Analysis and Machine Intelligence
Predictive low-rank decomposition for kernel methods
ICML '05 Proceedings of the 22nd international conference on Machine learning
Evaluation of Optimization Methods for Network Bottleneck Diagnosis
ICAC '07 Proceedings of the Fourth International Conference on Autonomic Computing
Robust Face Recognition via Sparse Representation
IEEE Transactions on Pattern Analysis and Machine Intelligence
IJCAI'09 Proceedings of the 21st international joint conference on Artificial intelligence
Towards Structural Sparsity: An Explicit l2/l0 Approach
ICDM '10 Proceedings of the 2010 IEEE International Conference on Data Mining
Robust and efficient subspace segmentation via least squares regression
ECCV'12 Proceedings of the 12th European conference on Computer Vision - Volume Part VII
Robust subspace discovery via relaxed rank minimization
Neural Computation
This paper presents the multi-subspace discovery problem and provides a theoretical solution that is guaranteed to simultaneously recover the number of subspaces, the dimension of each subspace, and the membership of data points in each subspace. We further propose a data representation model to handle noisy real-world data, and we develop a novel optimization approach to learn this model that is guaranteed to converge to a global optimizer. As applications of our models, we first apply our solution as a preprocessing step in a series of machine learning problems, including clustering, classification, and semi-supervised learning. We find that our method automatically obtains a robust data representation that preserves the affine subspace structure of high-dimensional data and yields more accurate results in these learning tasks. We also build a robust standalone classifier that directly uses our sparse and low-rank representation model. Experimental results indicate that our preprocessing improves data quality and that the standalone classifier outperforms several state-of-the-art learning approaches.
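The abstract refers to a sparse and low-rank representation model obtained via relaxed rank minimization, but does not reproduce the formulation. The standard convex relaxation of this kind replaces the rank with the nuclear norm and the sparsity count with the l1 norm, as in Robust PCA; the sketch below, which is a generic inexact-ALM solver and not the authors' exact method, decomposes a data matrix D into a low-rank part L plus a sparse corruption part S. The function name `robust_pca` and the default parameter choices are illustrative assumptions.

```python
import numpy as np

def robust_pca(D, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Decompose D ~ L + S by minimizing ||L||_* + lam * ||S||_1
    with the inexact augmented Lagrange multiplier (ALM) method.
    This is a generic Robust-PCA-style sketch, not the paper's solver."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))        # common default weight
    norm_D = np.linalg.norm(D, 'fro')
    if mu is None:
        mu = 1.25 / np.linalg.norm(D, 2)       # spectral-norm-based step size
    rho = 1.5                                  # growth factor for mu
    # Dual variable initialization (scaled copy of D).
    Y = D / max(np.linalg.norm(D, 2), np.max(np.abs(D)) / lam)
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(max_iter):
        # Low-rank step: singular value thresholding, shrink by 1/mu.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        sig = np.maximum(sig - 1.0 / mu, 0.0)
        L = (U * sig) @ Vt
        # Sparse step: entrywise soft-thresholding, shrink by lam/mu.
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # Dual update; stop when the residual D - L - S is small.
        Z = D - L - S
        Y = Y + mu * Z
        mu = min(mu * rho, 1e7)
        if np.linalg.norm(Z, 'fro') / norm_D < tol:
            break
    return L, S

# Usage: recover a rank-2 matrix corrupted by sparse gross errors.
np.random.seed(0)
A = np.random.randn(50, 2) @ np.random.randn(2, 50)   # rank-2 ground truth
S0 = np.zeros((50, 50))
mask = np.random.rand(50, 50) < 0.05                  # 5% corrupted entries
S0[mask] = 5 * np.random.randn(mask.sum())
L, S = robust_pca(A + S0)
rel_err = np.linalg.norm(L - A, 'fro') / np.linalg.norm(A, 'fro')
```

With a few percent of grossly corrupted entries, the nuclear-norm relaxation typically recovers the low-rank component to small relative error, which is the property the abstract's preprocessing use case relies on.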