Editorial: Errors in the Variables, Unobserved Heterogeneity, and Other Ways of Hiding Statistical Error

  • Author:
  • Steven M. Shugan

  • Affiliations:
  • University of Florida, Warrington College of Business, 201B Bryan Hall, P.O. Box 117155, Gainesville, Florida 32611

  • Venue:
  • Marketing Science
  • Year:
  • 2006

Abstract

One research function is proposing new scientific theories; another is testing the falsifiable predictions of those theories. Eventually, sufficient observations reveal valid predictions. For the impatient, behold statistical methods, which attribute inconsistent predictions to either faulty data (e.g., measurement error) or faulty theories. Testing theories, however, differs from estimating unknown parameters in known relationships. When testing theories, it is dangerous enough to cure inconsistencies by adding observed explanatory variables (i.e., variables beyond the theory), let alone unobserved explanatory variables. Adding ad hoc explanatory variables mimics experimental controls when experiments are impractical. Assuming unobservable variables is different, partly because realizations of unobserved variables are unavailable for validating estimates. When different statistical assumptions about error produce dramatically different conclusions, we should doubt the theory, the data, or both. Theory tests should be insensitive to assumptions about error, particularly adjustments for error from unobserved variables. These adjustments can fallaciously inflate support for wrong theories, partly by implicitly under-weighting observations inconsistent with the theory. Inconsistent estimates often convey an important message: the data are inconsistent with the theory! Although adjustments for unobserved variables and ex post information are extraordinarily useful when estimating known relationships, requiring researchers to make these adjustments when testing theories is inappropriate.
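The mechanism the abstract warns about, adjustments for unobserved heterogeneity absorbing exactly the misfit that should count as evidence against a theory, can be illustrated with a small simulation. The sketch below is not from the editorial; the data-generating process, variable names, and group structure are assumptions chosen purely for illustration. It fits a linear "theory" model to data generated from a curved relationship, then refits the same linear model with group-level intercepts standing in for unobserved heterogeneity; the heterogeneity terms soak up the systematic deviation, and the wrong theory suddenly appears to fit well.

```python
# Illustrative simulation (assumed example, not from the editorial):
# heterogeneity adjustments can absorb systematic misfit and make a
# wrong theory look well supported.
import numpy as np

rng = np.random.default_rng(0)

# True relationship is curved: y = x**2 + noise.
# The "theory" under test predicts a straight line: y = a + b*x.
n_groups, n_per_group = 10, 50
group = np.repeat(np.arange(n_groups), n_per_group)
# Each group observes x in its own narrow range (a common panel structure).
x = rng.normal(loc=group - (n_groups - 1) / 2.0, scale=0.3)
y = x**2 + rng.normal(scale=1.0, size=x.size)

def r_squared(y, X):
    """Ordinary least squares fit of y on the columns of X; return R^2."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# (1) Test the theory directly: y ~ 1 + x.
X_theory = np.column_stack([np.ones_like(x), x])
r2_theory = r_squared(y, X_theory)

# (2) Same theory plus group-level "unobserved heterogeneity"
#     (one intercept per group, mimicking a fixed-effects adjustment).
dummies = (group[:, None] == np.arange(n_groups)[None, :]).astype(float)
X_adjusted = np.column_stack([x, dummies])
r2_adjusted = r_squared(y, X_adjusted)

print(f"R^2, linear theory only:                 {r2_theory:.3f}")
print(f"R^2, theory + heterogeneity adjustment:  {r2_adjusted:.3f}")
# The adjusted model fits far better, but the improvement comes from the
# group intercepts absorbing the curvature the theory fails to predict;
# the evidence against the theory has been hidden rather than answered.
```

In this sketch the first fit leaves large, systematic residuals, which is the informative outcome: the data are inconsistent with the linear theory. The second fit reports a high R^2 only because the added intercepts account for variation the theory itself never explained, which is the sense in which such adjustments can inflate apparent support for a wrong theory.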