The use of the normalized maximum likelihood (NML) for model selection in Gaussian linear regression is problematic because the normalization coefficient is not finite. The most elegant solution, proposed by Rissanen, is to apply a particular constraint to the data space. In this paper, we demonstrate that this methodology can be generalized, and we discuss two particular cases, namely the rhomboidal and the ellipsoidal constraints. The new findings are used to derive four NML-based criteria. For three of them, which have already been introduced in the previous literature, we provide a rigorous analysis. We also compare them against five state-of-the-art selection rules by conducting Monte Carlo simulations for families of models commonly used in signal processing. Additionally, for the eight criteria tested, we report results on their predictive capabilities for real-life data sets.
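To illustrate the kind of Monte Carlo comparison described in the abstract, the sketch below runs a model-order selection experiment for Gaussian linear regression. As a stand-in for the NML-based criteria (whose constrained-normalization formulas are not reproduced here), it uses the well-known BIC; the simulation setup (nested candidate regressors, Gaussian noise, correct-order rate) is an assumption about the experimental design, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def bic(y, X):
    """BIC for Gaussian linear regression with known-form noise.
    A placeholder criterion: an NML-based rule would replace this
    function in the comparison harness below."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    # n*log(ML variance estimate) + model-complexity penalty
    return n * np.log(rss / n) + k * np.log(n)

# Monte Carlo: the true model uses the first 3 of 8 candidate regressors.
n, p_true, p_max, trials = 100, 3, 8, 200
hits = 0
for _ in range(trials):
    X = rng.standard_normal((n, p_max))
    y = X[:, :p_true] @ np.ones(p_true) + rng.standard_normal(n)
    scores = [bic(y, X[:, :k]) for k in range(1, p_max + 1)]
    hits += (np.argmin(scores) + 1) == p_true  # did we pick the true order?
print(f"correct-order rate: {hits / trials:.2f}")
```

Swapping `bic` for one of the constrained-NML criteria (or for AIC-type rules) and varying the sample size and signal strength gives the sort of head-to-head comparison the paper reports.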