The two most popular model selection rules in the signal processing literature have been Akaike's (1974) criterion, AIC, and Rissanen's (1978) principle of minimum description length, MDL. The two rules are similar in form: both consist of a data term and a penalty term. Their data terms are identical, but their penalties differ, with MDL being more stringent toward overparameterization. AIC assigns an equal incremental penalty to each additional model parameter, regardless of the parameter's role in the model. In most of the model selection literature, MDL also appears in a form that implies an equal penalty for every unknown parameter; we refer to this form of the criterion as naive MDL. In this paper, we show that identical penalization of every parameter is not appropriate and that the penalty must depend on the model structure and on the type of model parameters. Our approach is Bayesian and relies on large-sample theory. We derive maximum a posteriori (MAP) rules for several different families of competing models and obtain forms similar to AIC and naive MDL. For some families, however, the derived penalties are different, and in those cases our extensive simulations show that the MAP rule outperforms both AIC and naive MDL.
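
The abstract does not spell out the two criteria, so the following is a minimal sketch using their standard textbook forms, AIC(k) = -log f(y | θ̂_k) + k and naive MDL(k) = -log f(y | θ̂_k) + (k/2) log N, applied to a synthetic polynomial order-selection problem. The data, function names, and the Gaussian-noise assumption are illustrative and not taken from the paper; the point is only to show the shared data term and the differing penalties.

```python
import numpy as np

# Standard forms (assumed here, not quoted from the abstract):
#   AIC(k)       = -log f(y | theta_hat_k) + k              (Akaike, 1974)
#   naive MDL(k) = -log f(y | theta_hat_k) + (k/2) log N    (Rissanen, 1978)
# Both share the same data term; only the penalty differs.

def neg_log_likelihood(residuals):
    """Common data term for i.i.d. Gaussian errors, with the noise
    variance replaced by its ML estimate (additive constants dropped)."""
    N = len(residuals)
    return 0.5 * N * np.log(np.mean(residuals**2))

def aic(residuals, k):
    # Equal penalty of 1 per parameter, whatever the parameter's role.
    return neg_log_likelihood(residuals) + k

def naive_mdl(residuals, k):
    # Equal penalty of (1/2) log N per parameter.
    N = len(residuals)
    return neg_log_likelihood(residuals) + 0.5 * k * np.log(N)

# Toy order-selection problem: data generated by a quadratic plus noise.
rng = np.random.default_rng(0)
N = 200
x = np.linspace(-1.0, 1.0, N)
y = 1.0 + 2.0 * x - 1.5 * x**2 + 0.1 * rng.standard_normal(N)

for k in range(1, 7):                      # k = number of polynomial coefficients
    coeffs = np.polyfit(x, y, deg=k - 1)   # ML fit under Gaussian noise
    res = y - np.polyval(coeffs, x)
    print(f"k={k}:  AIC={aic(res, k):8.2f}   naive MDL={naive_mdl(res, k):8.2f}")
```

Since the naive MDL penalty per parameter is (1/2) log N, it exceeds AIC's penalty of 1 whenever N > e² ≈ 7.4, which is why MDL is the more stringent of the two toward overparameterization. Both criteria above charge every parameter equally; the paper's argument is precisely that this uniform charge is not appropriate once the model structure and parameter types are taken into account.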