Accelerating the convergence of the LMS adaptive algorithm

  • Authors:
  • Bernard Widrow;Max Kamenetsky

  • Affiliations:
  • Stanford University;Stanford University

  • Venue:
  • Accelerating the convergence of the LMS adaptive algorithm
  • Year:
  • 2005

Abstract

Adaptive filters have found many applications in digital signal processing over the years. One of the most common adaptive algorithms is the Least Mean Square (LMS) algorithm, an online method that uses a noisy estimate of the gradient of the mean square error (MSE) performance surface to iteratively compute the weight vector yielding the minimum-MSE solution. LMS is a simple and robust algorithm, and thus enjoys great popularity. However, its transient performance may suffer when the input autocorrelation matrix has a large eigenvalue spread. In this thesis, we first establish the LMS/Newton algorithm as the optimal theoretical benchmark for reducing eigenvalue spread. We then define the excess error energy of an adaptive algorithm as the area between the transient learning curve and its steady-state asymptotic value, and we demonstrate that the LMS and LMS/Newton algorithms are equivalent in terms of their average excess error energies. This new result further explains the popularity of the LMS algorithm. We next examine fixed orthogonal transforms for reducing the input eigenvalue spread and explain how the appropriate transform should be chosen based on the input power spectral density. We show that both Discrete Hartley Transform LMS (DHT-LMS) and Discrete Fourier Transform LMS (DFT-LMS) attain the same asymptotic eigenvalue spread for first-order Markov inputs; since the fastest DHT implementations are a factor of two more computationally efficient than the corresponding DFT implementations, DHT-LMS is therefore preferred over DFT-LMS for such inputs. However, orthogonal transform-based algorithms may perform poorly with certain narrowband inputs. We therefore introduce the variable leaky LMS (VL-LMS) algorithm, which performs well across a broad range of inputs. We show that VL-LMS can reduce the eigenvalue spread without the large increase in steady-state asymptotic MSE that plagues fixed-leak algorithms. We derive bounds on the convergence of VL-LMS and on its steady-state asymptotic MSE in a simple nonstationary system-identification model. Finally, we introduce the colored variable leaky LMS (CVL-LMS) algorithm, which can exploit additional spectral information about the input to color the leak intelligently. Simulation results demonstrate that VL-LMS and CVL-LMS can significantly outperform LMS.
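
The LMS and leaky-LMS weight updates described in the abstract can be made concrete with a short sketch. The following Python snippet is a minimal illustration under stated assumptions, not code from the thesis: the step size mu, the leak parameter gamma, the first-order Markov (AR(1)) input model, and all function and variable names are chosen for the example. Setting gamma = 0 gives plain LMS; a small fixed gamma > 0 gives the fixed-leak variant whose steady-state MSE penalty motivates the variable-leak (VL-LMS) idea.

```python
import numpy as np

def lms_identify(x, d, num_taps, mu, gamma=0.0):
    """Adapt an FIR filter to map input x onto desired response d.

    gamma == 0 gives the standard LMS update; gamma > 0 gives a
    fixed-leak ("leaky LMS") update. mu and gamma are illustrative
    parameters, not values taken from the thesis.
    """
    w = np.zeros(num_taps)                    # adaptive weight vector
    sq_err = np.zeros(len(x))                 # squared error per step (learning curve)
    for k in range(num_taps - 1, len(x)):
        u = x[k - num_taps + 1:k + 1][::-1]   # tapped delay line, newest sample first
        e = d[k] - w @ u                      # a priori estimation error
        # Leaky LMS: shrink the weights toward zero, then take a gradient step.
        w = (1.0 - 2.0 * mu * gamma) * w + 2.0 * mu * e * u
        sq_err[k] = e ** 2
    return w, sq_err

# Example: identify a short FIR channel driven by a first-order Markov (AR(1))
# input, the colored-input case in which plain LMS converges slowly.
rng = np.random.default_rng(0)
n, a = 5000, 0.9                              # a near 1 -> large eigenvalue spread
x = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + rng.standard_normal()
h_true = np.array([1.0, 0.5, -0.25, 0.1])     # unknown system to be identified
d = np.convolve(x, h_true)[:n] + 0.01 * rng.standard_normal(n)
w_hat, curve = lms_identify(x, d, num_taps=4, mu=0.005)
```

Averaging `curve` over independent runs would give the learning curve whose area above its steady-state value is the excess error energy discussed in the abstract.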