This paper studies a class of algorithms called natural gradient (NG) algorithms. The least mean square (LMS) algorithm is derived within the NG framework, and a family of LMS variants that exploit sparsity is obtained. The same procedure is then applied to other algorithm families, such as the constant modulus algorithm (CMA) and decision-directed (DD) LMS. Mean squared error, stability, and convergence analyses of the family of sparse LMS algorithms are provided, and it is shown that if the system is sparse, the new algorithms converge faster for a given total asymptotic MSE. Simulations confirm the analysis. In addition, Bayesian priors matching the statistics of a database of real channels are given, and algorithms that exploit these priors are derived. Simulations using measured channels demonstrate a realistic application of these algorithms.
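To make the idea concrete, the following is a minimal sketch (not the paper's exact derivation) of how a sparsity-exploiting LMS variant differs from plain LMS in a system-identification setting. It uses a proportionate-style update, in which each tap's step size is scaled by the tap's current magnitude, as a simple stand-in for the NG-derived sparse LMS; the gain floor, normalization, and all parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse unknown system: 64 taps, only 4 of them nonzero.
N = 64
w_true = np.zeros(N)
w_true[rng.choice(N, 4, replace=False)] = rng.standard_normal(4)

def identify(weighted, mu=0.01, n_iter=5000):
    """Identify w_true with LMS. If `weighted`, scale each tap's update
    by its current magnitude (a proportionate-style step, used here as a
    simple stand-in for the NG-derived sparse LMS)."""
    w = np.zeros(N)
    for _ in range(n_iter):
        x = rng.standard_normal(N)                      # input regressor
        d = w_true @ x + 0.01 * rng.standard_normal()   # noisy desired signal
        e = d - w @ x                                   # a priori error
        if weighted:
            g = np.abs(w) + 1e-3   # per-tap gain, floored so every tap keeps adapting
            g /= g.mean()          # normalize: same average step budget as plain LMS
            w += mu * e * g * x
        else:
            w += mu * e * x
    # Return the final mean-square deviation from the true system.
    return np.sum((w - w_true) ** 2)

msd_lms = identify(weighted=False)
msd_sparse = identify(weighted=True)
```

Because the magnitude weighting concentrates adaptation energy on the few active taps while nearly freezing the zero taps, the weighted variant tends to converge faster on sparse systems for the same total step budget, which is the qualitative behavior the paper's analysis establishes.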