Adaptive filtering with binary reinforcement

  • Authors:
  • A. Gersho

  • Venue:
  • IEEE Transactions on Information Theory
  • Year:
  • 1984

Abstract

Recently there has been increased interest in high-speed adaptive filtering, where the usual stochastic gradient or least mean-square (LMS) algorithm is replaced with a simpler algorithm in which adaptation is guided only by the polarity of the error signal. In this paper the convergence of this binary reinforcement (BR) algorithm is proved under the usual independence assumption, and the surprising observation is made that, unlike the LMS algorithm, convergence occurs for any positive value of the step-size parameter. While the stochastic gradient algorithm attempts to minimize a mean-square error cost function, the binary reinforcement algorithm in fact attempts to minimize a mean-absolute error cost function. It is proved for the binary reinforcement algorithm that the tap-weight vector converges in distribution to a random vector that is suitably concentrated about the optimal value based on a least mean-absolute error cost function. For a sufficiently small step size, the expected cost of the asymptotic weight vector can be made as close as desired to the minimum cost attained by the optimum weight vector.
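
To make the update rule concrete, below is a minimal NumPy sketch of binary-reinforcement (sign-error) adaptation for a tapped-delay-line FIR filter, shown in a system-identification setting. The function name sign_error_lms, the demo system h, and all parameter values are illustrative assumptions, not taken from the paper:

    import numpy as np

    def sign_error_lms(x, d, n_taps, mu):
        """Binary-reinforcement (sign-error LMS) adaptive FIR filter.

        Each step moves the tap-weight vector a fixed amount mu in the
        direction given only by the polarity of the error signal:
            w <- w + mu * sgn(e_k) * u_k
        where u_k holds the n_taps most recent input samples.
        """
        w = np.zeros(n_taps)                 # tap-weight vector
        u = np.zeros(n_taps)                 # tapped delay line, most recent sample first
        y = np.zeros(len(x))                 # filter output
        e = np.zeros(len(x))                 # error signal
        for k in range(len(x)):
            u = np.roll(u, 1)                # shift the delay line
            u[0] = x[k]
            y[k] = w @ u                     # filter output
            e[k] = d[k] - y[k]               # error against the desired signal
            w = w + mu * np.sign(e[k]) * u   # adaptation guided by error polarity only
        return w, y, e

    # Illustrative use: identify a short, hypothetical FIR system.
    rng = np.random.default_rng(0)
    h = np.array([0.5, -0.3, 0.1])           # unknown system, assumed for the demo
    x = rng.standard_normal(20_000)
    d = np.convolve(x, h)[: len(x)] + 0.01 * rng.standard_normal(len(x))
    w_hat, _, _ = sign_error_lms(x, d, n_taps=3, mu=1e-3)
    print(w_hat)                             # approaches h for a small step size

Replacing np.sign(e[k]) with e[k] in the weight update recovers the ordinary LMS recursion; the binary-reinforcement form reduces the error feedback to a sign test, which is the simplification that motivates its use in high-speed filtering.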