Developing Domain-Specific Gesture Recognizers for Smart Diagram Environments

  • Authors:
  • Adrian Bickerstaffe, Aidan Lane, Bernd Meyer, Kim Marriott

  • Affiliation:
  • Monash University, Clayton, Victoria, Australia 3800

  • Venue:
  • Graphics Recognition. Recent Advances and New Opportunities
  • Year:
  • 2008


Abstract

Computer understanding of visual languages in pen-based environments requires a combination of lexical analysis, in which the basic tokens are recognized from hand-drawn gestures, and syntax analysis, in which the structure is recognized. Typically, lexical analysis relies on statistical methods while syntax analysis uses grammars. The two stages are not independent: contextual information provided by syntax analysis is required for lexical disambiguation. Previous research into visual language recognition has focused on syntax analysis, while relatively little research has been devoted to lexical analysis and its integration with syntax analysis. This paper describes GestureLab, a tool designed for building domain-specific gesture recognizers, and its integration with Cider, a grammar engine that uses GestureLab recognizers and parses visual languages. Recognizers created with GestureLab perform probabilistic lexical recognition, with disambiguation occurring during parsing based on contextual syntactic information. Creating domain-specific gesture recognizers is not a simple task. It requires significant experimentation and training with large gesture corpora to determine a suitable feature set and classifier algorithm. GestureLab supports such experimentation and facilitates collaboration by allowing corpora to be shared via remote databases.
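The abstract's central idea is that the lexical recognizer does not commit to a single token per gesture; instead it emits a probability distribution over candidate tokens, and the parser uses grammatical context to pick the most probable interpretation that the grammar accepts. The following is a minimal sketch of that two-stage pipeline, not GestureLab's or Cider's actual API: the names `recognize`, `is_valid`, and `parse_best`, the toy grammar, and the probability values are all illustrative assumptions.

```python
# Hypothetical sketch: probabilistic lexical recognition followed by
# grammar-based disambiguation, as described in the abstract.
# All names and numbers here are illustrative, not the paper's API.

from itertools import product

def recognize(gesture):
    """Stage 1 (lexical): return candidate tokens with probabilities.
    Stands in for a trained classifier over stroke features."""
    # e.g. a short vertical stroke is ambiguous between '1' and 'l'.
    return {"short_vertical": [("1", 0.55), ("l", 0.45)],
            "plus_sign":      [("+", 0.90), ("t", 0.10)]}[gesture]

def is_valid(tokens):
    """Stage 2 (syntactic): a toy grammar accepting 'digit + digit'.
    Stands in for a full grammar engine such as Cider."""
    return (len(tokens) == 3 and tokens[0].isdigit()
            and tokens[1] == "+" and tokens[2].isdigit())

def parse_best(gestures):
    """Choose the most probable token sequence the grammar accepts."""
    candidates = [recognize(g) for g in gestures]
    best, best_p = None, 0.0
    for combo in product(*candidates):
        tokens = [tok for tok, _ in combo]
        p = 1.0
        for _, prob in combo:
            p *= prob
        if is_valid(tokens) and p > best_p:
            best, best_p = tokens, p
    return best, best_p

if __name__ == "__main__":
    # '1 + 1' drawn as three strokes: syntactic context rules out the
    # 'l' reading of the vertical strokes, which lexical probabilities
    # alone could not do reliably.
    print(parse_best(["short_vertical", "plus_sign", "short_vertical"]))
    # -> (['1', '+', '1'], 0.27225)
```

The sketch enumerates all candidate combinations, which is only feasible for tiny inputs; the point is the division of labor, in which the recognizer stays probabilistic and the final commitment happens during parsing.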