In many applications, such as image and video processing, the data matrix simultaneously possesses a low-rank structure capturing global information and a sparse component capturing local information. Accurately extracting these low-rank and sparse components is a major challenge. Robust Principal Component Analysis (RPCA) is a general framework for extracting such structures. It is well established that, under certain assumptions, convex optimization using the trace norm and the l1-norm is an effective computational surrogate for the difficult RPCA problem. However, this convex formulation rests on strong assumptions that may not hold in real-world applications, and the approximation error introduced by the convex relaxation often cannot be neglected. In this paper, we present a novel non-convex formulation of the RPCA problem using the capped trace norm and the capped l1-norm. In addition, we present two algorithms for solving the resulting non-convex optimization problem: one based on the Difference of Convex functions (DC) programming framework, and another that solves the sub-problems via a greedy approach. Our empirical evaluations on synthetic and real-world data show that both proposed algorithms achieve higher accuracy than existing convex formulations. Furthermore, the greedy algorithm is more efficient than the DC programming approach while achieving comparable accuracy.
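To make the convex baseline concrete: the trace-norm-plus-l1 relaxation the abstract refers to is the well-known Principal Component Pursuit problem, min ||L||_* + lam·||S||_1 subject to L + S = M. Below is a minimal sketch of the standard ADMM-style (inexact augmented Lagrangian) solver for this convex baseline, not the paper's capped-norm method. The function name `rpca_pcp` and the `lam`/`mu` heuristics are illustrative assumptions.

```python
import numpy as np

def svd_shrink(X, tau):
    # Singular value thresholding: soft-threshold the singular values of X
    # (the proximal operator of the trace norm).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(X, tau):
    # Elementwise soft-thresholding (the proximal operator of the l1-norm).
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_pcp(M, lam=None, mu=None, n_iter=500, tol=1e-7):
    """Convex RPCA (Principal Component Pursuit) via an ADMM-style scheme:
    min ||L||_* + lam * ||S||_1  s.t.  L + S = M."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))        # standard PCP weight
    if mu is None:
        mu = 0.25 * m * n / np.sum(np.abs(M))  # common step-size heuristic
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)  # scaled dual variable for the constraint L + S = M
    norm_M = np.linalg.norm(M, 'fro')
    for _ in range(n_iter):
        L = svd_shrink(M - S + Y / mu, 1.0 / mu)       # low-rank update
        S = soft_threshold(M - L + Y / mu, lam / mu)   # sparse update
        R = M - L - S                                  # constraint residual
        Y += mu * R                                    # dual ascent step
        if np.linalg.norm(R, 'fro') <= tol * norm_M:
            break
    return L, S
```

On data that is genuinely a low-rank matrix plus a sparse corruption, this convex program typically recovers both components; the paper's point is that when its incoherence-style assumptions fail, the soft-thresholding bias becomes significant, motivating the capped (non-convex) surrogates.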