A comparison of two display models for collaborative sensemaking

  • Authors:
  • Haeyong Chung; Sharon Lynn Chu; Chris North

  • Affiliations:
  • Virginia Tech, Blacksburg, VA (all authors)

  • Venue:
  • Proceedings of the 2nd ACM International Symposium on Pervasive Displays
  • Year:
  • 2013

Abstract

In this paper, we investigate how a distributed model of sensemaking, spread over multiple displays and devices, impacts the sensemaking process for the individual and for the group, and whether it offers feasible opportunities to improve the quality and efficiency of sensemaking efforts. Our study compares two display models for collaborative visual analytics: one based on personal displays with shared visualization spaces, and the other based on a distributed model in which collocated teams can appropriate different displays as workspaces in a unified manner. Although the general sensemaking workflow did not change across the two types of systems, we observed that the system based on the distributed model enabled more transparent interaction during collaboration and allowed for greater 'objectification' of information. Our findings have significant implications for how future visual analytics systems can be designed to motivate effective collaborative sensemaking.