In the adaptive filtering literature, the input autocorrelation matrix R_xx is invariably assumed to be positive definite, so the theoretical Wiener-Hopf normal equations (R_xx h = r_xd) have a unique solution h = h_opt ("there is only a single global optimum" [B. Widrow, S. Stearns, Adaptive Signal Processing, Prentice-Hall, 1985, p. 21]), since a full-rank R_xx is invertible. But what if R_xx is only positive semi-definite and rank-deficient? In this case the Wiener-Hopf normal equations are still consistent, but admit infinitely many solutions. It is well known that the filter coefficients of the least mean square (LMS) stochastic gradient algorithm converge in the mean to the unique Wiener-Hopf solution h_opt when R_xx is full-rank. In this paper, we show that even when R_xx is rank-deficient it is still possible to predict the (convergence) behaviour of the LMS algorithm from knowledge of R_xx, r_xd, and the initial conditions of the filter coefficients.
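The rank-deficient case can be illustrated with a short numerical sketch (our own illustration, not code from the paper; the subspace direction u, system h_true, and initial vector h0 are arbitrary choices): when the input is confined to a proper subspace, the LMS update can only move the weight vector along the range of R_xx, so the component of the initial coefficients lying in the null space of R_xx is never touched.

```python
# Illustrative sketch: LMS with a rank-deficient input autocorrelation matrix.
# The 2-tap input is confined to the 1-D subspace spanned by u, so
# R_xx = E[x x^T] is rank one. The weight component in range(R_xx) converges
# to the Wiener solution there, while the null-space component stays frozen
# at its initial value h0.
import numpy as np

rng = np.random.default_rng(0)
u = np.array([1.0, 1.0]) / np.sqrt(2.0)   # input subspace direction (assumed)
h_true = np.array([1.0, 2.0])             # unknown system generating d[n]
h0 = np.array([5.0, -3.0])                # arbitrary initial filter coefficients
mu = 0.1                                  # LMS step size

h = h0.copy()
for _ in range(4000):
    s = rng.uniform(-1.0, 1.0)            # scalar driving signal
    x = s * u                             # input vector, always in span{u}
    d = h_true @ x                        # noiseless desired response
    e = d - h @ x                         # a priori error
    h = h + mu * e * x                    # LMS update: moves h only along u

P = np.outer(u, u)                        # projector onto range(R_xx)
# Predicted limit: Wiener part in range(R_xx) + untouched null-space part of h0
h_pred = P @ h_true + (np.eye(2) - P) @ h0
print(h, h_pred)
```

The final weights match the prediction built from R_xx, r_xd, and h0, rather than any single "global optimum": any vector of the form P h_true + (I - P) c, with c arbitrary, solves the normal equations, and the initial conditions select which one LMS reaches.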