Usability and hardcopy manuals: evaluating research designs and methods

  • Authors: Barbara Mirel
  • Affiliations: Illinois Institute of Technology
  • Venue: SIGDOC '90 Proceedings of the 8th annual international conference on Systems documentation
  • Year: 1990


Abstract

Over the past decade, testing the usability of print software manuals has matured into an established area of study, characterized by a wide range of qualitative and quantitative methods. Some of the most common methods include field observations, surveys, interviews, protocol analyses, focus groups, iterative testing, and quasi-experimental lab simulations [1]. Such diverse approaches to usability testing offer an opportunity for complementary inquiries and analyses. For example, findings from focus groups can provide key questions for experimental researchers to pursue in greater depth and with greater possibility for generalizability. Essentially, this complementary approach envisions an interaction between the academy, with its propensity toward pure, experimental research, and industry, with its more applied approaches for alpha and beta testing.

Patricia Wright, a specialist in usability studies, has long argued that integrating pure and applied research is the best means for expanding our knowledge about effective document design [2; 3]. Such integration reveals both the immediately applicable aspects of effective manuals and the more theoretical boundaries in textual features that make a difference for general types of tasks, readers, and contexts of use.

In order to realize the potential of conducting a conversation between pure and applied research, documentation researchers and practitioners must clearly understand the limitations of the conclusions that investigators derive from specific methods of inquiry. In this article, I look solely at experimental usability tests that rely on quantitative methods of analysis. I analyze the ways in which the research designs and questions of the past ten years of experimental studies affect the strength of cumulative conclusions and the confidence we can have in those conclusions. My purpose is not to give preference to experimental research as the most important approach to usability testing. Far from it. Rather, my critical review has two purposes: (1) to facilitate the dialogue between academic and industrial researchers by identifying the limits of current experimental findings; and (2) to propose research agendas and designs for future experimental usability tests that can strengthen the conclusions that such researchers offer for practical consideration.

My evaluation of ten years of experimental usability studies shows that many of their conclusions are not strong enough to serve as valid, generalizable, and replicable foundations for subsequent research, be it pure or applied. These conclusions can be strengthened by designing studies that pay more attention to the sequencing and integration of related investigations and that institute better controls for sample selection, size, and composition. This article discusses my overall findings, the details of which I will develop more fully in my presentation.