Metrics---When and Why Nonaveraging Statistics Work

  • Authors:
  • Steven M. Shugan; Debanjan Mitra

  • Affiliations:
  • Warrington College of Business Administration, University of Florida, Gainesville, Florida 32611 (both authors)

  • Venue:
  • Management Science
  • Year:
  • 2009

Abstract

Good metrics are well-defined formulae (often involving averaging) that transmute multiple measures of raw numerical performance (e.g., dollar sales, referrals, number of customers) to create informative summary statistics (e.g., average share of wallet, average customer tenure). Despite myriad uses (benchmarking, monitoring, allocating resources, diagnosing problems, explanatory variables), most uses require metrics that contain information summarizing multiple observations. On this criterion, we show empirically (with people data) that although averaging has remarkable theoretical properties, supposedly inferior nonaveraging metrics (e.g., maximum, variance) are often better. We explain theoretically (with exact proofs) and numerically (with simulations) when and why. For example, when the environment causes a correlation between observed sample sizes (e.g., number of past purchases, projects, observations) and latent underlying parameters (e.g., the likelihood of favorable outcomes), the maximum statistic is a better metric than the mean. We refer to this environmental effect as the Muth effect, which occurs when rational markets provide more opportunities (i.e., more observations) to individuals and organizations with greater innate ability. Moreover, when environments are adverse (e.g., failure-rich), nonaveraging metrics correctly overweight favorable outcomes. We refer to this environmental effect as the Anna Karenina effect, which occurs when less-favorable outcomes convey less information. These environmental effects impact metric construction, selection, and employment.
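The abstract's central claim, that a nonaveraging statistic such as the maximum can outrank the mean when observed sample sizes are correlated with latent ability (the Muth effect), lends itself to a small simulation. The sketch below is a hypothetical illustration of that intuition, not the paper's own simulation design; the distributions and parameters (uniform latent ability, Poisson opportunity counts, normally distributed outcomes) are assumptions chosen only to make the mechanism visible.

```python
# Illustrative simulation of the Muth effect described in the abstract:
# when higher-ability individuals receive more opportunities (observations),
# the sample maximum can recover the latent-ability ranking better than the
# sample mean. All parameter choices are assumptions for illustration.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_individuals = 5000
theta = rng.uniform(0.0, 1.0, n_individuals)   # latent ability (unobserved)

# Muth effect: sample size grows with latent ability.
n_obs = 1 + rng.poisson(10.0 * theta)

sample_mean = np.empty(n_individuals)
sample_max = np.empty(n_individuals)
for i in range(n_individuals):
    # Noisy performance outcomes centred on latent ability.
    y = rng.normal(loc=theta[i], scale=1.0, size=n_obs[i])
    sample_mean[i] = y.mean()
    sample_max[i] = y.max()

# Rank agreement between each metric and the latent parameter.
rho_mean, _ = spearmanr(sample_mean, theta)
rho_max, _ = spearmanr(sample_max, theta)
print(f"Spearman(mean, theta) = {rho_mean:.3f}")
print(f"Spearman(max,  theta) = {rho_max:.3f}")
```

Under these assumed settings the maximum typically correlates more strongly with the latent parameter than the mean does, because the maximum also reflects the number of observations, which is itself informative about ability; the mean, by construction, discards that sample-size information.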