Statistical conditional optimization criteria lead to the development of an iterative algorithm that starts from the matched filter (or constraint vector) and generates a sequence of filters converging to the minimum-variance-distortionless-response (MVDR) solution for any positive definite input autocorrelation matrix. Computationally, the algorithm is a simple, noninvasive, recursive procedure that avoids any form of explicit autocorrelation matrix inversion, decomposition, or diagonalization. Theoretical analysis reveals basic properties of the algorithm and establishes formal convergence. When the input autocorrelation matrix is replaced by a conventional sample-average (positive definite) estimate, the algorithm effectively generates a sequence of MVDR filter estimators; the bias converges rapidly to zero, and the covariance trace rises slowly and asymptotically to the covariance trace of the familiar sample-matrix-inversion (SMI) estimator. In fact, formal convergence of the estimator sequence to the SMI estimate is established. For short data records, however, it is the early, nonasymptotic elements of the generated sequence of estimators that offer a favorable bias/covariance balance; in mean-square estimation error they are seen to outperform constraint-LMS, RLS-type, orthogonal multistage decomposition, and plain and diagonally loaded SMI estimates. An illustrative interference suppression example is followed throughout the presentation.
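The inversion-free recursion described above can be sketched as follows. This is a minimal NumPy reconstruction, not the paper's exact pseudocode: it starts from the scaled matched filter (so that the distortionless constraint w^H v = 1 holds), and at each step removes the component of the gradient Rw along the constraint vector, then takes an exact line-search step; the update direction is orthogonal to v, so the constraint is preserved and no inversion, decomposition, or diagonalization of R is needed. The function name and step-size formula are illustrative assumptions.

```python
import numpy as np

def mvdr_iterative(R, v, n_iter=500):
    """Illustrative sketch: iterate toward the MVDR filter
    w = R^{-1} v / (v^H R^{-1} v) without inverting R,
    starting from the (scaled) matched filter."""
    v = np.asarray(v, dtype=complex)
    w = v / (v.conj() @ v)               # w_0: scaled matched filter, w^H v = 1
    for _ in range(n_iter):
        Rw = R @ w
        # Component of the gradient orthogonal to the constraint vector v
        g = Rw - v * (v.conj() @ Rw) / (v.conj() @ v)
        denom = g.conj() @ R @ g
        if abs(denom) < 1e-15:           # gradient vanished: at the MVDR solution
            break
        mu = (g.conj() @ Rw) / denom     # exact line search for the quadratic cost
        w = w - mu * g                   # g is orthogonal to v, so w^H v stays 1
    return w
```

Replacing the true R with a sample-average estimate built from a data record turns each iterate into an MVDR filter estimator; per the abstract, truncating the recursion early (rather than running it to its SMI limit) is what yields the favorable bias/covariance trade-off on short records.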