We consider the problem of causal estimation, i.e., filtering, of a real-valued signal corrupted by zero-mean, temporally independent, real-valued additive noise, under the mean-squared error (MSE) criterion. We construct a universal filter whose per-symbol squared error, for every bounded underlying signal, is essentially as small as that of the best finite-duration impulse response (FIR) filter of a given order. We assume no stochastic mechanism generating the underlying signal, and require only that the variance of the noise be known to the filter. The regret of our scheme's expected MSE is shown to decay as O(log n/n), where n is the length of the signal. Moreover, we present a stronger concentration result that guarantees the performance of our scheme not only in expectation, but also with high probability. Our result also recovers the conventional stochastic-setting guarantee: when the underlying signal is a stationary process, our filter attains the performance of the optimal FIR filter. We support our theoretical findings with several experiments showcasing the potential merits of our universal filter in practice. Our analysis combines tools from the problems of universal filtering and competitive online regression.
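The setting above can be illustrated with a small simulation. The sketch below is hypothetical and is not the authors' algorithm: it runs an online ridge-regularized FIR filter trained on the observable surrogate loss (z_t − h·u_t)² + 2σ²h₀, where u_t collects the last k noisy observations. When the noise variance σ² is known, the expectation of this surrogate equals the true filtering MSE plus a constant, which is the standard device that makes learning from noisy observations possible. The online filter is then compared to the clairvoyant best fixed FIR filter of the same order chosen in hindsight (the comparator in the regret bound). The signal model (clipped AR(1)), the order k, and all constants are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: bounded AR(1) clean signal, Gaussian noise of known variance.
n, k, sigma2, lam = 5000, 4, 0.25, 1.0
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + 0.3 * rng.standard_normal()
x = np.clip(x, -1.0, 1.0)                    # boundedness, as the setting requires
z = x + np.sqrt(sigma2) * rng.standard_normal(n)

def regressor(t):
    """u_t = (z_t, z_{t-1}, ..., z_{t-k+1}), zero-padded near the start."""
    u = np.zeros(k)
    m = min(k, t + 1)
    u[:m] = z[t::-1][:m]
    return u

# Online ridge regression on the surrogate loss
#   l_t(h) = (z_t - h.u_t)^2 + 2*sigma2*h[0],
# whose expectation over the noise equals the filtering MSE plus sigma2.
# The minimizer over the past satisfies (sum u u^T + lam*I) h = sum u z - sigma2*t*e_0.
A = lam * np.eye(k)          # running sum of u_t u_t^T plus ridge term
b = np.zeros(k)              # running sum of u_t z_t, minus sigma2*t on coordinate 0
loss_online = 0.0
for t in range(n):
    u = regressor(t)
    h = np.linalg.solve(A, b)                # filter fitted to the past only
    xhat = h @ u                             # causal estimate of x_t from z_1..z_t
    loss_online += (x[t] - xhat) ** 2        # true loss (observable here by simulation)
    A += np.outer(u, u)
    b += u * z[t]
    b[0] -= sigma2                           # bias correction for the noisy regressor z_t

# Clairvoyant comparator: best fixed FIR filter of order k in hindsight (uses x).
U = np.stack([regressor(t) for t in range(n)])
h_star = np.linalg.lstsq(U, x, rcond=None)[0]
loss_best = np.sum((x - U @ h_star) ** 2)

regret_per_symbol = (loss_online - loss_best) / n
print(f"per-symbol regret: {regret_per_symbol:.4f}")
```

The bias correction on coordinate 0 is the essential step: without it, plain least squares on the noisy pairs (u_t, z_t) would be pulled toward the identity filter h = e₀, since z_t appears in its own regressor vector.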