Maximum-Likelihood Estimation, the Cramér–Rao Bound, and the Method of Scoring With Parameter Constraints

  • Authors:
  • T. J. Moore, B. M. Sadler, R. J. Kozick

  • Affiliations:
  • Army Research Laboratory, Adelphi

  • Venue:
  • IEEE Transactions on Signal Processing
  • Year:
  • 2008

Abstract

Maximum-likelihood (ML) estimation is a popular approach to solving many signal processing problems. Many of these problems cannot be solved analytically, so numerical techniques such as the method of scoring are applied. In many scenarios, however, it is desirable to modify the ML problem with the inclusion of additional side information. Often this side information takes the form of parametric constraints, which the ML estimate (MLE) must then satisfy. We unify the asymptotic constrained ML (CML) theory with the constrained Cramér–Rao bound (CCRB) theory by showing that the CML estimate (CMLE) is asymptotically efficient with respect to the CCRB. We also generalize the classical method of scoring using the CCRB to include the constraints, so that the constraints are satisfied after each iterate. Convergence properties and examples verify the usefulness of the constrained scoring approach. As a particular example, an alternative and more general CMLE is developed for the complex parameter linear model with linear constraints. A novel proof of the efficiency of this estimator is provided using the CCRB.
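
To illustrate the constrained scoring idea summarized in the abstract, the following is a minimal sketch, assuming a toy real-valued linear Gaussian model y = Hθ + noise with a linear equality constraint Aθ = b. The matrices H and A, the values, and the stopping rule are illustrative assumptions, not taken from the paper. Each step moves only within the null space of the constraint Jacobian, using the reduced information matrix U^T F U (the quantity at the heart of the CCRB), so every iterate remains feasible.

```python
import numpy as np

# Hypothetical toy setup (not from the paper): y = H @ theta + noise,
# with a linear equality constraint A @ theta = b. A is assumed to have
# full row rank.
rng = np.random.default_rng(0)
n, p = 50, 4
H = rng.standard_normal((n, p))
theta_true = np.array([1.0, 2.0, 2.0, -1.0])   # satisfies theta[1] == theta[2]
A = np.array([[0.0, 1.0, -1.0, 0.0]])          # constraint: theta_1 - theta_2 = 0
b = np.zeros(1)
sigma2 = 0.5
y = H @ theta_true + np.sqrt(sigma2) * rng.standard_normal(n)

# U spans the null space of the constraint Jacobian A, as in the CCRB construction.
_, _, Vt = np.linalg.svd(A)
U = Vt[A.shape[0]:].T                          # columns: orthonormal basis of null(A)

# Feasible starting point: least-norm solution of A @ theta = b.
theta = np.linalg.lstsq(A, b, rcond=None)[0]

for _ in range(10):
    score = H.T @ (y - H @ theta) / sigma2     # score (gradient of the log-likelihood)
    fisher = H.T @ H / sigma2                  # Fisher information matrix
    # Constrained scoring step: search only in the constraint's tangent space,
    # using the reduced information U^T F U.
    step = U @ np.linalg.solve(U.T @ fisher @ U, U.T @ score)
    theta = theta + step
    if np.linalg.norm(step) < 1e-10:
        break

print("constraint residual:", A @ theta - b)   # remains (numerically) zero at every iterate
print("constrained estimate:", theta)
```

For this linear model with a linear constraint the iteration converges in a single step; for nonlinear constraints the same skeleton applies, but a restoration step back onto the feasible set is typically needed after each update.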