Convergence and steady-state analysis of the normalized least mean fourth algorithm (Digital Signal Processing)
Adaptive mixed-norm filtering algorithm based on SαSG noise model (Digital Signal Processing)
A class of stochastic gradient algorithms with exponentiated error cost functions (Digital Signal Processing)
A noise constrained least mean fourth (NCLMF) adaptive algorithm (Signal Processing)
Information theoretic learning with adaptive kernels (Signal Processing)
Channel equalization using simplified least mean-fourth algorithm (Digital Signal Processing)
Mean-square stability of the Normalized Least-Mean Fourth algorithm for white Gaussian inputs (Digital Signal Processing)
An online AUC formulation for binary classification (Pattern Recognition)
KIMEL: A kernel incremental metalearning algorithm (Signal Processing)
New steepest descent algorithms for adaptive filtering have been devised that allow error minimization in the mean-fourth, mean-sixth, etc., sense. During adaptation, the weights undergo exponential relaxation toward their optimal solutions. Time constants have been derived, and surprisingly they turn out to be proportional to the time constants that would have been obtained if the steepest descent least mean square (LMS) algorithm of Widrow and Hoff had been used. The new gradient algorithms are insignificantly more complicated to program and to compute than the LMS algorithm. Their general form is

W_{j+1} = W_j + 2\mu K \epsilon_j^{2K-1} X_j,

where W_j is the present weight vector, W_{j+1} is the next weight vector, \epsilon_j is the present error, X_j is the present input vector, \mu is a constant controlling stability and rate of convergence, and 2K is the exponent of the error being minimized.

Conditions have been derived for weight-vector convergence of the mean and of the variance for the new gradient algorithms. The behavior of the least mean fourth (LMF) algorithm is of special interest. In comparing this algorithm to the LMS algorithm, when both are set to have exactly the same time constants for the weight relaxation process, the LMF algorithm, under some circumstances, will have a substantially lower weight noise than the LMS algorithm. It is possible, therefore, that a minimum mean fourth error algorithm can do a better job of least squares estimation than a mean square error algorithm. This intriguing concept has implications for all forms of adaptive algorithms, whether they are based on steepest descent or otherwise.
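The general update rule above can be sketched as a short adaptive FIR filter. This is a minimal illustration, not the authors' reference implementation: the function name, tap ordering, and step-size values are assumptions chosen for clarity. Setting K = 1 recovers the Widrow-Hoff LMS algorithm, and K = 2 gives the least mean fourth (LMF) algorithm discussed in the abstract.

```python
import numpy as np

def mean_2k_filter(x, d, num_taps, mu, K=2):
    """Adaptive FIR filter minimizing the mean-2K error (illustrative sketch).

    x  : input signal (1-D array)
    d  : desired signal (1-D array, same length as x)
    mu : step size controlling stability and convergence rate
    K  : half the error exponent (K=1 -> LMS, K=2 -> LMF)

    Returns the final weight vector W and the error sequence epsilon.
    """
    w = np.zeros(num_taps)          # initial weight vector W_0
    e = np.zeros(len(x))            # error history
    for j in range(num_taps, len(x)):
        xj = x[j - num_taps:j][::-1]        # present input vector X_j (newest sample first)
        e[j] = d[j] - w @ xj                # present error epsilon_j
        # General update: W_{j+1} = W_j + 2*mu*K * epsilon_j^(2K-1) * X_j
        w = w + 2 * mu * K * e[j] ** (2 * K - 1) * xj
    return w, e
```

Note that for K > 1 the effective step scales with epsilon_j^{2K-2}, so LMF takes large steps when the error is large and very small ones near convergence; in practice the input amplitude must be kept modest (or mu small) to preserve stability, consistent with the convergence conditions the abstract mentions.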