Understanding decision-support effectiveness: a computer simulation approach

  • Authors:
  • Jeffrey E. Kottemann; Kathleen M. Boyer-Wright; Joel F. Kincaid; Fred D. Davis

  • Affiliations:
  • Perdue School of Business, Salisbury University, Salisbury, MD; Perdue School of Business, Salisbury University, Salisbury, MD; Winston-Salem State University, Winston-Salem, NC; University of Arkansas, Fayetteville, AR

  • Venue:
  • IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans - Special section: Best papers from the 2007 biometrics: Theory, applications, and systems (BTAS 07) conference
  • Year:
  • 2009

Abstract

The interplay between decision making and decision-support tools has proven puzzling for many years. One of the most popular decision-support tools, what-if analysis, is no exception: decades of empirical studies have found positive, negative, and null effects. In this paper, we contrast the marginal-analysis decision-making strategy enabled by what-if analysis with the anchoring-and-adjustment strategies prevalent among unaided decision makers. Using an aggregate production planning decision task, we develop a Monte Carlo simulation to model 1000 independent what-if decision-making episodes across a wide range of conditions. The results mirror and explain seemingly contradictory findings across multiple prior experiments. This paper thus formalizes a simulation approach that expands the scope of previous findings on unaided versus what-if-aided decision making and shows that relative performance is quite sensitive to task conditions. In this light, the performance differences reported in past research are to be expected. While our analysis involves a single task context, the larger and more important point is that, even within a single task context, performance differences between unaided and aided decision making are emergent.
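To make the contrast concrete, the following is a minimal, hypothetical sketch of the kind of Monte Carlo comparison the abstract describes; it is not the authors' actual model. All names, the quadratic-free asymmetric cost function, the demand distributions, and the candidate grid are illustrative assumptions: an unaided decision maker anchors on last period's demand and adjusts partway toward a forecast, while a what-if user evaluates candidate production plans against the forecast and picks the cheapest, over 1000 independent episodes.

```python
import random

random.seed(42)

def cost(production, demand, holding=2.0, shortage=5.0):
    # Illustrative asymmetric cost: overproduction pays a holding cost,
    # underproduction pays a steeper shortage cost.
    diff = production - demand
    return holding * diff if diff >= 0 else shortage * (-diff)

def anchored_decision(prior_demand, forecast, adjustment=0.5):
    # Unaided strategy: anchor on prior demand, adjust only partway
    # toward the forecast (anchoring and adjustment).
    return prior_demand + adjustment * (forecast - prior_demand)

def what_if_decision(forecast, candidates):
    # Aided strategy: try each candidate plan against the forecast
    # (marginal analysis via what-if) and keep the cheapest.
    return min(candidates, key=lambda p: cost(p, forecast))

episodes = 1000
unaided_costs, aided_costs = [], []
for _ in range(episodes):
    prior = random.gauss(100, 15)            # last period's demand (the anchor)
    actual = random.gauss(100, 15)           # this period's realized demand
    forecast = actual + random.gauss(0, 10)  # noisy forecast, available to both
    candidates = [forecast + d for d in range(-20, 21, 5)]
    unaided_costs.append(cost(anchored_decision(prior, forecast), actual))
    aided_costs.append(cost(what_if_decision(forecast, candidates), actual))

print(f"mean cost, unaided (anchor-and-adjust): {sum(unaided_costs) / episodes:.1f}")
print(f"mean cost, what-if aided (marginal):    {sum(aided_costs) / episodes:.1f}")
```

Varying the assumed task conditions (forecast noise, cost asymmetry, anchor quality) shifts which strategy wins, which is the abstract's central point: relative performance is emergent from task conditions rather than fixed.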