It has been known for many years that a robust solution to an overdetermined system of linear equations Ax ≈ b is obtained by minimizing the L1 norm of the residual error. A correct solution x to the linear system can often be obtained in this way, in spite of large errors (outliers) in some elements of the (m × n) matrix A and the data vector b. This is in contrast to a least squares solution, where even one large error will typically cause a large error in x. In this paper we give necessary and sufficient conditions under which the correct solution is obtained when there are some errors in A and b. Based on the sufficient condition, it is shown that if k rows of [A b] contain large errors, the correct solution is guaranteed if (m − n)/n ≥ 2k/σ, where σ > 0 is a lower bound on singular values related to A. Since m typically represents the number of measurements, this inequality shows how many data points are needed to guarantee a correct solution in the presence of large errors in some of the data. This inequality is, in fact, a worst-case upper bound, and computational results are presented which show that the correct solution will be obtained, with high probability, for much smaller values of m − n.
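The contrast between the L1 and least squares solutions can be illustrated with a small numerical sketch (not from the paper; the dimensions, random data, and outlier magnitude below are illustrative choices). The L1 problem min ||Ax − b||_1 is solved via its standard linear programming reformulation, and both solutions are compared on a system with one grossly corrupted measurement:

```python
import numpy as np
from scipy.optimize import linprog

def l1_solve(A, b):
    """Minimize ||A x - b||_1 via the standard LP reformulation:
    min sum(t)  subject to  -t <= A x - b <= t,  variables (x, t)."""
    m, n = A.shape
    # Objective: zeros for x, ones for the residual bounds t.
    c = np.concatenate([np.zeros(n), np.ones(m)])
    # A x - b <= t   ->   A x - t <= b
    # b - A x <= t   ->  -A x - t <= -b
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]

# Overdetermined system with a known solution, plus one large outlier in b.
rng = np.random.default_rng(0)
m, n = 50, 3
A = rng.standard_normal((m, n))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
b[0] += 100.0  # gross error in a single measurement

x_l1 = l1_solve(A, b)                       # robust: ignores the outlier
x_l2, *_ = np.linalg.lstsq(A, b, rcond=None)  # least squares: pulled off by it

print("L1 error:", np.linalg.norm(x_l1 - x_true))
print("L2 error:", np.linalg.norm(x_l2 - x_true))
```

With m − n large relative to the single corrupted row (k = 1), the L1 solution recovers x to machine-level accuracy, while the least squares solution carries an error proportional to the outlier's size, in line with the breakdown behavior the abstract describes.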