Articulate: a semi-automated model for translating natural language queries into meaningful visualizations

  • Authors:
  • Yiwen Sun, Jason Leigh, Andrew Johnson, Sangyoon Lee

  • Affiliations:
  • Electronic Visualization Laboratory, University of Illinois at Chicago (all authors)

  • Venue:
  • SG'10: Proceedings of the 10th International Conference on Smart Graphics
  • Year:
  • 2010


Abstract

While many visualization tools exist that offer sophisticated functions for charting complex data, they still expect users to possess a high degree of expertise in wielding the tools to create an effective visualization. This paper presents Articulate, an attempt at a semi-automated visual analytic model that is guided by a conversational user interface to allow users to verbally describe and then manipulate what they want to see. We use natural language processing and machine learning methods to translate imprecise sentences into explicit expressions, and then apply a heuristic graph generation algorithm to create a suitable visualization. The goal is to relieve the user of the burden of having to learn a complex user interface in order to craft a visualization.
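To make the pipeline in the abstract concrete, here is a minimal sketch of how a query-to-chart translation of this kind might look. Everything in it is an illustrative assumption: the keyword cues, the `ChartSpec` type, and the `translate` function are hypothetical stand-ins for the paper's NLP/ML classification and heuristic graph generation steps, not the authors' implementation.

```python
# Hypothetical sketch of an Articulate-style pipeline: map an imprecise
# natural-language query to an explicit chart specification. The keyword
# table crudely stands in for the paper's NLP/ML step; the rule lookup
# stands in for its heuristic graph generation algorithm.
from dataclasses import dataclass, field

@dataclass
class ChartSpec:
    chart_type: str
    variables: list = field(default_factory=list)

# Toy lexical cues mapping query intent to a chart type (assumed, not from the paper).
CUES = {
    "over time": "line", "trend": "line",
    "compare": "bar", "versus": "bar",
    "relationship": "scatter", "correlate": "scatter",
    "distribution": "histogram",
}

def translate(query: str, known_variables: list) -> ChartSpec:
    """Heuristically turn a free-form query into an explicit chart spec."""
    q = query.lower()
    chart = "bar"  # fallback when no cue matches
    for cue, ctype in CUES.items():
        if cue in q:
            chart = ctype
            break
    # Keep only the dataset variables actually mentioned in the query.
    mentioned = [v for v in known_variables if v.lower() in q]
    return ChartSpec(chart, mentioned)

spec = translate("Show the relationship between temperature and depth",
                 ["temperature", "depth", "salinity"])
print(spec.chart_type, spec.variables)  # scatter ['temperature', 'depth']
```

A real system would replace the keyword table with parsing and learned classification, but the overall shape — imprecise sentence in, explicit visualization specification out — mirrors the model the abstract describes.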