MixT: automatic generation of step-by-step mixed media tutorials

  • Authors:
  • Pei-Yu Chi (University of California, Berkeley, Berkeley, California, USA)
  • Sally Ahn (University of California, Berkeley, Berkeley, California, USA)
  • Amanda Ren (University of California, Berkeley, Berkeley, California, USA)
  • Mira Dontcheva (Adobe Systems, San Francisco, California, USA)
  • Wilmot Li (Adobe Systems, San Francisco, California, USA)
  • Björn Hartmann (University of California, Berkeley, Berkeley, California, USA)

  • Venue:
  • Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology (UIST '12)
  • Year:
  • 2012


Abstract

Users of complex software applications often learn concepts and skills through step-by-step tutorials. Today, these tutorials are published in two dominant forms: static tutorials composed of images and text that are easy to scan, but cannot effectively describe dynamic interactions; and video tutorials that show all manipulations in detail, but are hard to navigate. We hypothesize that a mixed tutorial with static instructions and per-step videos can combine the benefits of both formats. We describe a comparative study of static, video, and mixed image manipulation tutorials with 12 participants and distill design guidelines for mixed tutorials. We present MixT, a system that automatically generates step-by-step mixed media tutorials from user demonstrations. MixT segments screencapture video into steps using logs of application commands and input events, applies video compositing techniques to focus on salient information, and highlights interactions through mouse trails. Informal evaluation suggests that automatically generated mixed media tutorials were as effective in helping users complete tasks as tutorials that were created manually.
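
The log-based segmentation the abstract describes lends itself to a short illustration. Below is a minimal Python sketch of how logged command timestamps could partition a screen-capture timeline into per-step clips. The `LogEvent` and `Step` types, the `segment_steps` function, and the half-second padding are illustrative assumptions for this sketch, not MixT's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class LogEvent:
    time: float      # seconds into the screen-capture recording
    command: str     # logged application command, e.g. "Gaussian Blur"

@dataclass
class Step:
    command: str
    start: float     # video in-point for this step's clip
    end: float       # video out-point for this step's clip

def segment_steps(events, video_duration, padding=0.5):
    """Split a recording into per-step clips at logged command boundaries.

    Each step's clip runs from shortly before its command fires until
    shortly before the next command, so the viewer sees the interaction
    that triggered the command plus its immediate effect.
    """
    events = sorted(events, key=lambda e: e.time)
    steps = []
    for i, event in enumerate(events):
        start = max(0.0, event.time - padding)
        if i + 1 < len(events):
            # Clamp so clips never invert when commands fire close together.
            end = max(start, events[i + 1].time - padding)
        else:
            end = video_duration
        steps.append(Step(event.command, start, end))
    return steps

if __name__ == "__main__":
    log = [LogEvent(3.2, "Select Lasso Tool"),
           LogEvent(9.7, "Gaussian Blur"),
           LogEvent(15.1, "Save")]
    for step in segment_steps(log, video_duration=20.0):
        print(f"{step.command}: {step.start:.1f}s - {step.end:.1f}s")
```

In this scheme, each per-step clip could then be embedded next to the corresponding static instruction, which is the mixed-media layout the paper evaluates.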