A model for notification systems evaluation—assessing user goals for multitasking activity

  • Authors:
  • D. Scott McCrickard; C. M. Chewar; Jacob P. Somervell; Ali Ndiwalana

  • Affiliations:
  • Virginia Polytechnic Institute and State University, Blacksburg, VA (all authors)

  • Venue:
  • ACM Transactions on Computer-Human Interaction (TOCHI)
  • Year:
  • 2003


Abstract

Addressing the need to tailor usability evaluation methods (UEMs) and promote effective reuse of HCI knowledge for computing activities undertaken in divided-attention situations, we present the foundations of a unifying model that can guide evaluation efforts for notification systems. Often implemented as ubiquitous systems or within a small portion of the traditional desktop, notification systems typically deliver information of interest in a parallel, multitasking approach, extraneous or supplemental to a user's attention priority. Such systems represent a difficult challenge to evaluate meaningfully. We introduce a design model of user goals based on blends of three critical parameters---interruption, reaction, and comprehension. Categorization possibilities form a logical, descriptive design space for notification systems, rooted in human information processing theory. This model allows conceptualization of distinct action models for at least eight classes of notification systems, which we describe and analyze with a human information processing model. System classification regions immediately suggest useful empirical and analytical evaluation metrics from related literature. We present a case study that demonstrates how these techniques can assist an evaluator in adapting traditional UEMs for notification and other multitasking systems. We explain why using the design model categorization scheme enabled us to generate evaluation results that are more relevant for the system redesign than the results of the original exploration done by the system's designers.
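The categorization scheme described above can be sketched in code. The abstract rates a notification system on three critical parameters (interruption, reaction, comprehension), whose low/high blends yield at least eight design classes. The following Python sketch is purely illustrative: the numeric 0–1 scale, the threshold, and the class-label format are assumptions, not the authors' notation.

```python
# Illustrative sketch of the IRC design space from the abstract.
# Assumptions (not from the paper): parameters are rated 0.0-1.0,
# a 0.5 threshold splits "low" from "high", and labels like
# "I-lo R-lo C-hi" are hypothetical names for the eight regions.

from dataclasses import dataclass


@dataclass
class IRCRating:
    interruption: float   # cost of drawing attention away from the primary task
    reaction: float       # need for a rapid user response
    comprehension: float  # need for long-term understanding of the content


def irc_class(r: IRCRating, threshold: float = 0.5) -> str:
    """Map a rating to one of the 2**3 = 8 low/high IRC regions."""
    parts = [("I", r.interruption), ("R", r.reaction), ("C", r.comprehension)]
    return " ".join(f"{name}-{'hi' if value >= threshold else 'lo'}"
                    for name, value in parts)


# Example: a peripheral stock ticker might aim for low interruption,
# low reaction, and high comprehension.
print(irc_class(IRCRating(0.2, 0.1, 0.9)))  # → "I-lo R-lo C-hi"
```

Classifying a system into one of these regions is what, per the abstract, points the evaluator toward the empirical and analytical metrics appropriate for that class.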