On the convergence of the LMS algorithm with a rank-deficient input autocorrelation matrix

  • Authors:
  • D. C. McLernon; M. M. Lara; A. G. Orozco-Lugo

  • Affiliations:
  • Institute of Integrated Information Systems, School of Electronic and Electrical Engineering, University of Leeds, Leeds LS2 9JT, UK; Centre for Research and Advanced Studies of IPN, Communications Section, Av. IPN No. 2508, Colonia San Pedro Zacatenco, CP 07360, México D.F., Mexico (shared by the second and third authors)

  • Venue:
  • Signal Processing
  • Year:
  • 2009

Abstract

In books and papers on adaptive filtering, the input autocorrelation matrix R_xx is invariably assumed to be positive definite, so the theoretical Wiener-Hopf normal equations (R_xx h = r_xd) have a unique solution h = h_opt (''there is only a single global optimum'' [B. Widrow, S. Stearns, Adaptive Signal Processing, Prentice-Hall, 1985, p. 21]) due to the invertibility of R_xx (i.e., it is full rank). But what if R_xx is only positive semi-definite and not full rank? In this case the Wiener-Hopf normal equations are still consistent, but they admit an infinite number of solutions. It is well known that the filter coefficients of the least mean square (LMS) stochastic gradient algorithm converge (in the mean) to the unique Wiener-Hopf solution h_opt when R_xx is full rank. In this paper we show that even when R_xx is not full rank it is still possible to predict the (convergence) behaviour of the LMS algorithm from knowledge of R_xx, r_xd and the initial conditions of the filter coefficients.
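The rank-deficient situation the abstract describes can be illustrated with a minimal numerical sketch (not taken from the paper; the signals and step size below are illustrative assumptions). A constant input x[n] = 1 driving a 2-tap filter gives R_xx = [[1, 1], [1, 1]], which has rank 1, so any h with h[0] + h[1] = 1 solves the normal equations. The LMS update then drives the error to zero while leaving the component of the initial coefficient vector lying in the null space of R_xx (here, along [1, -1]) untouched, so the limit point depends on the initial conditions:

```python
# Sketch: LMS with a rank-deficient input autocorrelation matrix.
# Constant input x[n] = 1, 2-tap filter => R_xx = [[1,1],[1,1]], rank 1.
# Desired signal generated by h_true = [0.5, 0.5], so d[n] = 1.
# All solutions of R_xx h = r_xd satisfy h[0] + h[1] = 1.
import numpy as np

mu = 0.05                    # step size (illustrative choice)
w = np.array([2.0, 0.0])     # initial taps: sum = 2, null-space part (diff) = 2
for n in range(2000):
    u = np.array([1.0, 1.0])  # regressor [x[n], x[n-1]] = [1, 1]
    d = 1.0                   # desired signal
    e = d - w @ u             # a priori error
    w = w + mu * e * u        # LMS update (moves w only along [1, 1])

print(w)             # -> approximately [1.5, -0.5]: w[0] + w[1] = 1
print(w[0] - w[1])   # -> 2.0: the null-space component of w(0) is preserved
```

Because the update direction u always lies in the range of R_xx, the difference w[0] - w[1] is an exact invariant of the iteration here; the algorithm converges to the particular Wiener-Hopf solution selected by the initial conditions rather than to a single global optimum.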