This paper investigates impulse response estimation for linear time-invariant (LTI) systems when only noisy, finite-length input-output data are available. The competing parametric candidates are least-squares impulse response estimates of possibly different lengths. It is well known that noise prohibits the use of model sets with a large number of parameters, since the resulting parameter estimation error can be quite large. Model selection methods acknowledge this problem and therefore provide metrics for comparing estimates across different model classes. Such metrics typically combine the available least-squares output error, which decreases as the number of parameters increases, with a function that penalizes the size of the model. In this paper, we approach the model class selection problem from a different perspective, one closely related to the underlying denoising problem. The method primarily focuses on estimating the parameter error in a given model class of finite order using the available least-squares output error. We show that such an estimate, provided in terms of upper and lower bounds that hold with a certain level of confidence, captures the appropriate tradeoff between the bias and variance of the estimation error. Consequently, these measures can serve as the basis for model comparison and model selection. Furthermore, we demonstrate how this approach reduces to the celebrated AIC method for a specific confidence level. Finally, we explore the performance of the method as the noise variance and/or the data length varies, and we analyze the consistency of the approach as the data length grows.
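The penalized-fit model selection criteria mentioned above can be illustrated with a small numerical sketch. The snippet below is not the paper's method: it is a minimal, assumed setup (a length-4 FIR system driven by white noise, NumPy least squares) that fits impulse response estimates of increasing length and scores each with the classical AIC, N·log(RSS/N) + 2n, where the first term is the fit and the second penalizes model size. All names and parameter values are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed true system: a length-4 FIR impulse response, white-noise input.
true_h = np.array([1.0, 0.6, -0.3, 0.1])
N = 400
u = rng.standard_normal(N)
y = np.convolve(u, true_h)[:N] + 0.1 * rng.standard_normal(N)

def fir_regressor(u, n):
    """Regression matrix whose k-th column is u delayed by k samples
    (zero initial conditions)."""
    X = np.zeros((len(u), n))
    for k in range(n):
        X[k:, k] = u[:len(u) - k]
    return X

def aic_score(u, y, n):
    """AIC for the least-squares FIR model of length n:
    N * log(RSS / N) + 2 * n."""
    X = fir_regressor(u, n)
    h, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ h) ** 2)
    return len(y) * np.log(rss / len(y)) + 2 * n

orders = list(range(1, 13))
scores = [aic_score(u, y, n) for n in orders]
best = orders[int(np.argmin(scores))]
print("selected impulse response length:", best)
```

With ample data and modest noise, the RSS drops sharply until the model length covers the true response, after which the 2n penalty dominates, so the selected length sits at or just above the true order.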