Robust Linear and Support Vector Regression
IEEE Transactions on Pattern Analysis and Machine Intelligence
An L1 estimation algorithm with degeneracy and linear constraints
Computational Statistics & Data Analysis
Voting method for the detection of subpixel flow field
Pattern Recognition Letters
Object-oriented software for quadratic programming
ACM Transactions on Mathematical Software (TOMS)
The ℓ1 oblique Procrustes problem
Statistics and Computing
A compact cooperative recurrent neural network for computing general constrained L1-norm estimators
IEEE Transactions on Signal Processing
Medial set, boundary, and topology of random point sets
Proceedings of the 11th International Conference on Theoretical Foundations of Computer Vision
Design of recurrent neural networks for solving constrained least absolute deviation problems
IEEE Transactions on Neural Networks
Relationships between a linear l1 estimation problem and the Huber M-estimator problem can be easily established through their dual formulations. The least norm solution of a linear programming problem studied by Mangasarian and Meyer [SIAM J. Control Optim., 17 (1979), pp. 745--752] provides a key link between the dual problems. Based on the dual formulations, we establish a local linearity property of the Huber M-estimators with respect to the tuning parameter $\gamma$ and prove that the solution set of the Huber M-estimator problem is Lipschitz continuous with respect to perturbations of the tuning parameter $\gamma$. As a consequence, the set of the linear l1 estimators is the limit of the set of the Huber M-estimators as $\gamma \to 0+$. Thus, the Huber M-estimator problem has many solutions for a small tuning parameter $\gamma$ if the linear l1 estimation problem has multiple solutions. A recursive version of Madsen and Nielsen's algorithm [SIAM J. Optim., 3 (1993), pp. 223--235], based on computation of the Huber M-estimator, is proposed for finding a linear l1 estimator.
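The limiting relationship described above can be illustrated numerically: minimize the Huber objective for a decreasing sequence of tuning parameters $\gamma$, warm-starting each solve from the previous one, and compare the result with an l1 estimator computed directly by linear programming. The sketch below is only an illustration of this continuation idea under generic solvers, not the recursive Madsen--Nielsen-style algorithm proposed in the paper; the data, the $\gamma$ schedule, and the choice of BFGS are assumptions for the demo.

```python
import numpy as np
from scipy.optimize import minimize, linprog

# Synthetic overdetermined system b ~ A x_true + noise (illustrative data).
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))
b = A @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=20)
m, n = A.shape

def huber_obj(x, gamma):
    # Huber loss: quadratic r^2/(2*gamma) for |r| <= gamma, linear |r| - gamma/2 otherwise.
    r = b - A @ x
    small = np.abs(r) <= gamma
    return np.sum(np.where(small, r**2 / (2 * gamma), np.abs(r) - gamma / 2))

def huber_grad(x, gamma):
    # Gradient of the Huber objective (continuous across the breakpoints).
    r = b - A @ x
    w = np.where(np.abs(r) <= gamma, r / gamma, np.sign(r))
    return -A.T @ w

# l1 estimator via the standard LP reformulation:
# minimize sum(u) subject to -u <= b - A x <= u, u >= 0.
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
b_ub = np.concatenate([b, -b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)] * m)
x_l1 = res.x[:n]

# Continuation in gamma: each Huber solve warm-starts the next, smaller gamma.
x = np.zeros(n)
for gamma in [1.0, 0.1, 1e-2, 1e-3]:
    x = minimize(huber_obj, x, args=(gamma,), jac=huber_grad, method="BFGS").x

print("Huber (gamma=1e-3):", x)
print("l1 (LP):          ", x_l1)
```

On well-conditioned data with a unique l1 solution, the Huber estimator for small $\gamma$ agrees with the LP solution to within roughly the size of $\gamma$, consistent with the local linearity and Lipschitz continuity results stated in the abstract.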