Recurrent neural networks for solving constrained least absolute deviation (LAD) problems, i.e., L1-norm optimization problems, have attracted much interest in recent years, but so far most such networks can handle only certain special linear constraints efficiently. In this paper, two neural networks are proposed for solving LAD problems with various linear constraints, including equality, two-sided inequality, and bound constraints. When tailored to special cases of the LAD problem in which not all constraint types are present, the two networks yield simpler architectures than most existing ones in the literature. In particular, a third network is devised for problems with both equality and one-sided inequality constraints. All of the networks proposed in this paper are rigorously shown to solve their corresponding problems. Networks designed for the same type of problem have the same structural complexity, because they share the same computing blocks and differ only in the connections between some of those blocks; this provides some flexibility for circuit realization. Numerical simulations are carried out to illustrate the theoretical results and to compare the convergence rates of the networks.
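To make the problem class concrete: a constrained LAD problem minimizes the L1 norm of the residual, min ||Ax - b||_1, subject to linear constraints on x. As a rough illustration (not the paper's recurrent-network method), the sketch below recasts such a problem as a linear program by introducing one auxiliary variable per residual and solves it with SciPy; the function name `lad_lp` and its interface are assumptions for this example.

```python
import numpy as np
from scipy.optimize import linprog

def lad_lp(A, b, C=None, d=None, x_bounds=None):
    """Illustrative LP reformulation of min ||A x - b||_1
    s.t. C x = d and optional bound constraints on x.

    Introduces t >= |A x - b| componentwise and minimizes sum(t).
    """
    m, n = A.shape
    # Decision vector z = [x; t]; objective counts only the t part.
    c = np.concatenate([np.zeros(n), np.ones(m)])
    # |A x - b| <= t  becomes:  A x - t <= b  and  -A x - t <= -b
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([b, -b])
    A_eq = b_eq = None
    if C is not None:
        # Pad the equality constraints with zeros for the t variables.
        A_eq = np.hstack([C, np.zeros((C.shape[0], m))])
        b_eq = d
    bounds = (list(x_bounds) if x_bounds is not None
              else [(None, None)] * n) + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[:n], res.fun

# Usage: 1-D LAD fit of [1, 2, 10]; the L1 minimizer is the median, 2.
x, f = lad_lp(np.ones((3, 1)), np.array([1.0, 2.0, 10.0]))
```

An LP solver is used here purely to define what a correct answer looks like; the paper's contribution is solving the same problem with analog-style recurrent network dynamics rather than a simplex or interior-point method.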