Touch2Annotate: generating better annotations with less human effort on multi-touch interfaces

  • Authors:
  • Yang Chen; Jing Yang; Scott Barlowe; Dong H. Jeong

  • Affiliations:
  • University of North Carolina at Charlotte, Charlotte, NC, USA (all authors)

  • Venue:
  • CHI '10 Extended Abstracts on Human Factors in Computing Systems
  • Year:
  • 2010



Abstract

Annotation is essential for effective visual sensemaking. For multidimensional data, most existing annotation approaches require users to manually type notes recording the semantic meaning of their findings. This demands considerable effort from multi-touch interface users, who typically type slowly and make frequent typing errors. To reduce the typing effort and improve the quality of the generated annotations, we propose a new approach that semi-automatically generates annotations with rich semantic meaning on multidimensional visualizations. A working prototype of this approach, named Touch2Annotate, has been implemented and used on a tabletop. We present a usage scenario of Touch2Annotate to demonstrate its effectiveness.
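
The abstract does not describe the generation algorithm itself. As a loose illustration only (not Touch2Annotate's actual method), the sketch below shows one way a semi-automatic annotation could be drafted for a touch-selected subset of multidimensional records: summarize which dimensions the selection is unusually high or low on, so the user confirms or lightly edits a sentence instead of typing it from scratch. The function name `draft_annotation`, the z-score heuristic, and the toy car data are all assumptions for illustration.

```python
# Minimal illustrative sketch (assumed, not the paper's algorithm): draft an
# annotation sentence for a selected subset of multidimensional records by
# noting which dimensions the selection deviates strongly on.

from statistics import mean, stdev

def draft_annotation(selection, dataset, dimensions, threshold=1.0):
    """Return a short annotation string describing the selected records."""
    phrases = []
    for dim in dimensions:
        all_values = [row[dim] for row in dataset]
        sel_values = [row[dim] for row in selection]
        mu, sigma = mean(all_values), stdev(all_values)
        if sigma == 0:
            continue
        z = (mean(sel_values) - mu) / sigma  # how far the selection deviates
        if z >= threshold:
            phrases.append(f"high {dim}")
        elif z <= -threshold:
            phrases.append(f"low {dim}")
    if not phrases:
        return f"{len(selection)} selected items"
    return f"{len(selection)} items with " + " and ".join(phrases)

# Example: annotate two points circled with a touch gesture in a car dataset.
cars = [
    {"mpg": 32, "horsepower": 70}, {"mpg": 30, "horsepower": 65},
    {"mpg": 15, "horsepower": 200}, {"mpg": 14, "horsepower": 220},
    {"mpg": 22, "horsepower": 110},
]
selected = cars[:2]
print(draft_annotation(selected, cars, ["mpg", "horsepower"], threshold=0.8))
# -> "2 items with high mpg and low horsepower"
```

In a real system the drafted text would be shown to the user for confirmation or quick touch-based editing, which is where the "semi-automatic" saving of typing effort would come from.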