Many problems can be characterized as the task of recovering the low-rank and sparse components of a given matrix. Recently, it was discovered that this nondeterministic polynomial-time hard (NP-hard) task can be accomplished well, both theoretically and numerically, by heuristically solving a convex relaxation in which the widely acknowledged nuclear norm and $l_1$ norm are used to induce low rank and sparsity, respectively. This paper studies the recovery task in the general setting where only a fraction of the entries of the matrix can be observed and the observations are corrupted by both impulsive and Gaussian noise. We show that the resulting model falls within the applicable scope of the classical augmented Lagrangian method. Moreover, the separable structure of the new model enables us to solve the involved subproblems more efficiently by splitting the augmented Lagrangian function. Accordingly, several splitting algorithms are developed for solving the new recovery model. Preliminary numerical experiments verify that these augmented-Lagrangian-based splitting algorithms are easy to implement and surprisingly efficient for tackling the new recovery model.