Note on "Comparison of model selection for regression" by Vladimir Cherkassky and Yunqian Ma

  • Authors:
  • Trevor Hastie; Rob Tibshirani; Jerome Friedman

  • Affiliations:
  • Department of Statistics, Stanford University, Stanford, CA (all authors)

  • Venue:
  • Neural Computation
  • Year:
  • 2003

Abstract

While Cherkassky and Ma (2003) raise some interesting issues in comparing techniques for model selection, their article appears to be written largely in protest of comparisons made in our book, The Elements of Statistical Learning (2001). Cherkassky and Ma feel that we falsely represented the structural risk minimization (SRM) method, which they defend strongly here.

In a two-page section of our book (pp. 212-213), we made an honest attempt to compare the SRM method with two related techniques, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Apparently, we did not apply SRM in the optimal way. We are also accused of using contrived examples, designed to make SRM look bad.

Alas, we did introduce some careless errors in our original simulation, errors that were corrected in the second and subsequent printings. Some of these errors were pointed out to us by Cherkassky and Ma (we supplied them with our source code), and as a result we replaced the assessment "SRM performs poorly overall" with a more moderate "the performance of SRM is mixed" (p. 212). These and other corrections can be seen in the errata section online at http://www-stat.stanford.edu/ElemStatLearn.
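
For readers unfamiliar with the AIC and BIC criteria mentioned above, the following is a minimal sketch of how they are typically applied to a regression model-selection problem. It is not the simulation from the book or from Cherkassky and Ma's article; the polynomial model family, the data-generating function, and the noise level are illustrative assumptions, and the Gaussian-error forms AIC = n log(RSS/n) + 2k and BIC = n log(RSS/n) + k log(n) are used.

```python
# Illustrative sketch: choosing polynomial degree by AIC and BIC.
# Data-generating function and noise level are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = np.sort(rng.uniform(-1.0, 1.0, n))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)  # hypothetical data

def rss_for_degree(d):
    """Residual sum of squares of a least-squares polynomial fit of degree d."""
    coefs = np.polyfit(x, y, d)
    resid = y - np.polyval(coefs, x)
    return float(resid @ resid)

results = []
for d in range(1, 11):
    k = d + 1                          # number of fitted coefficients
    rss = rss_for_degree(d)
    aic = n * np.log(rss / n) + 2 * k
    bic = n * np.log(rss / n) + k * np.log(n)
    results.append((d, aic, bic))

best_aic = min(results, key=lambda r: r[1])[0]
best_bic = min(results, key=lambda r: r[2])[0]
print(f"degree chosen by AIC: {best_aic}, by BIC: {best_bic}")
```

Because BIC's complexity penalty grows with log(n), it will generally select a model no more complex than AIC's choice on the same data.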