Strong points of weak convergence: a study using RPA gradient estimation for automatic learning
Automatica (Journal of IFAC)
The basic adaptive filtering algorithm
$$X_{n+1}^{\epsilon} = X_{n}^{\epsilon} - \epsilon Y_{n}\,(Y_{n}' X_{n}^{\epsilon} - \psi_{n})$$
is analyzed using the theory of weak convergence. Apart from some very special cases, the analysis is hard when done for each fixed $\epsilon > 0$. But the weak convergence techniques are set up to provide much information for small $\epsilon$. The relevant facts from the theory are given. Define $x^{\epsilon}(\cdot)$ by $x^{\epsilon}(t) = X_{n}^{\epsilon}$ on $[n\epsilon, n\epsilon + \epsilon)$. Then weak (distributional) convergence of $\{x^{\epsilon}(\cdot)\}$ and of $\{x^{\epsilon}(\cdot + t_{\epsilon})\}$ is proved under very weak assumptions, where $t_{\epsilon} \rightarrow \infty$ as $\epsilon \rightarrow 0$. The normalized errors $\{(X_{n}^{\epsilon} - \theta)/\sqrt{\epsilon}\}$ are analyzed, where $\theta$ is a "stable" point for the "mean" algorithm. The asymptotic properties of a projection algorithm are developed, where the $X_{n}^{\epsilon}$ are truncated at each iteration if they fall outside of a given set.
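To make the recursion concrete, here is a minimal NumPy sketch, not taken from the paper: the function names `adaptive_filter` and `x_eps`, the Gaussian model for the data $(Y_n, \psi_n)$, and the box used as the truncation set are all illustrative assumptions. It simulates the constant-gain recursion, its piecewise-constant interpolation $x^{\epsilon}(\cdot)$, and the projected (truncated) variant.

```python
import numpy as np

def adaptive_filter(eps, n_steps, theta, box=None, noise_sd=0.1, seed=0):
    """Run the constant-gain recursion
        X_{n+1} = X_n - eps * Y_n * (Y_n' X_n - psi_n),
    with the assumed data model psi_n = Y_n' theta + noise, so that theta
    plays the role of the 'stable' point of the mean algorithm.  If
    box = (lo, hi) is given, each iterate is truncated onto the box,
    mimicking the projection algorithm."""
    rng = np.random.default_rng(seed)
    d = len(theta)
    X = np.zeros(d)
    path = np.empty((n_steps, d))
    for n in range(n_steps):
        Y = rng.standard_normal(d)                          # regressor Y_n
        psi = Y @ theta + noise_sd * rng.standard_normal()  # reference psi_n
        X = X - eps * Y * (Y @ X - psi)                     # one stochastic step
        if box is not None:
            X = np.clip(X, box[0], box[1])                  # truncate if outside the set
        path[n] = X
    return path

def x_eps(path, eps, t):
    """Piecewise-constant interpolation: x^eps(t) = X_n on [n*eps, n*eps + eps)."""
    n = min(int(t // eps), len(path) - 1)
    return path[n]

# Illustrative run: for small eps the interpolated path should track the
# mean ODE toward theta, with fluctuations of order sqrt(eps) near theta.
theta = np.array([1.0, -2.0])
path = adaptive_filter(eps=0.01, n_steps=5000, theta=theta, box=(-5.0, 5.0))
print(x_eps(path, 0.01, t=40.0))            # approximately theta
print((path[-1] - theta) / np.sqrt(0.01))   # normalized error (X_n - theta)/sqrt(eps)
```

Rerunning the sketch with smaller $\epsilon$ illustrates the weak-convergence picture described in the abstract: the interpolated paths concentrate around the trajectory of the mean algorithm, while the $\sqrt{\epsilon}$ scaling keeps the normalized error term of moderate size.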