Asymptotic MAP criteria for model selection

  • Authors:
  • P.M. Djuric

  • Affiliations:
  • Dept. of Electr. Eng., State Univ. of New York, Stony Brook, NY

  • Venue:
  • IEEE Transactions on Signal Processing
  • Year:
  • 1998

Abstract

The two most popular model selection rules in the signal processing literature have been Akaike's (1974) criterion (AIC) and Rissanen's (1978) principle of minimum description length (MDL). These rules are similar in form in that they both consist of data and penalty terms. Their data terms are identical, but the penalties are different, with MDL being more stringent toward overparameterization. AIC penalizes each additional model parameter with an equal incremental amount of penalty, regardless of the parameter's role in the model. In most of the literature on model selection, MDL appears in a form that also suggests an equal penalty for every unknown parameter; we refer to this MDL criterion as naive MDL. In this paper, we show that identical penalization for every parameter is not appropriate and that the penalty has to depend on the model structure and the type of model parameters. The approach to showing this is Bayesian, and it relies on large-sample theory. We derive maximum a posteriori (MAP) rules for several different families of competing models and obtain forms that are similar to AIC and naive MDL. For some families, however, we find that the derived penalties are different. In those cases, our extensive simulations show that the MAP rule outperforms AIC and naive MDL.
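To make the data-plus-penalty structure described above concrete, here is a minimal Python sketch of the standard forms of the two criteria, AIC = -2 ln L + 2k and naive MDL = -2 ln L + k ln N (MDL is often written as -ln L + (k/2) ln N; the doubled form used here gives identical rankings). The log-likelihood values and parameter counts below are hypothetical, chosen only to illustrate that MDL's sample-size-dependent penalty is more stringent toward overparameterization; they are not taken from the paper.

```python
import math

def aic(log_lik, k):
    # Akaike's criterion: -2 * log-likelihood (data term) plus a fixed
    # penalty of 2 for every parameter, regardless of its role.
    return -2.0 * log_lik + 2.0 * k

def naive_mdl(log_lik, k, n):
    # "Naive" MDL: identical data term, but the per-parameter penalty
    # log(n) grows with the sample size n, punishing extra parameters harder.
    return -2.0 * log_lik + k * math.log(n)

# Hypothetical fitted log-likelihoods for two candidate models, n = 1000 samples.
n = 1000
ll_small, k_small = -1510.0, 3   # simpler model, slightly worse fit
ll_big, k_big = -1503.0, 8       # richer model, slightly better fit

pick_aic = "big" if aic(ll_big, k_big) < aic(ll_small, k_small) else "small"
pick_mdl = "big" if naive_mdl(ll_big, k_big, n) < naive_mdl(ll_small, k_small, n) else "small"
print(pick_aic, pick_mdl)  # AIC favors the richer model; naive MDL the simpler one
```

Because both criteria are minimized, the example shows how the same pair of fits can be ranked differently once the penalty depends on sample size, which is the tension the paper's MAP derivation resolves by making the penalty depend on the model structure itself.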