Using gestures to convey internal mental models and index multimedia content

  • Authors:
  • Pratik Biswas; Renate Fruchter

  • Affiliations:
  • Stanford University, Department of Electrical Engineering, Stanford, CA 94305-4020, USA; Stanford University, Department of Civil and Environmental Engineering, Stanford, CA 94305-4020, USA

  • Venue:
  • AI & Society
  • Year:
  • 2007

Abstract

Gestures can serve as external representations of abstract concepts that may otherwise be difficult to illustrate. Gestures often accompany verbal statements as embodiments of mental models, augmenting the communication of ideas, concepts, or envisioned shapes of products. A gesture is also an indicator of the subject and context of the issue under discussion. We argue that if gestures can be identified and formalized, they can serve as a knowledge indexing and retrieval tool and can provide a useful access point into unstructured digital video data. We present a methodology and a prototype, called I-Gesture, that allows users to (1) define a vocabulary of gestures for a specific domain, (2) build a digital library of the gesture vocabulary, and (3) mark up entire video streams based on the predefined vocabulary for future search and retrieval of digital content from the archive. The I-Gesture methodology and prototype are illustrated through scenarios in which they can be utilized. The paper concludes with results of evaluation experiments with I-Gesture using a test bed of design-construction projects.
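The three-step workflow the abstract describes can be sketched in code. The following is a minimal, hypothetical illustration of the idea only; all class and gesture names are invented for this sketch and are not part of the actual I-Gesture system.

```python
from dataclasses import dataclass, field

@dataclass
class GestureVocabulary:
    """Step 1 (hypothetical sketch): a named set of domain-specific gestures."""
    gestures: dict = field(default_factory=dict)  # gesture name -> description

    def define(self, name: str, description: str) -> None:
        self.gestures[name] = description

@dataclass
class VideoIndex:
    """Steps 2-3 (hypothetical sketch): mark up a video stream with gesture
    annotations and retrieve time segments by gesture name."""
    annotations: list = field(default_factory=list)  # (start_s, end_s, gesture)

    def mark_up(self, start_s: float, end_s: float, gesture: str) -> None:
        # Record that the given gesture occurs over this time segment.
        self.annotations.append((start_s, end_s, gesture))

    def retrieve(self, gesture: str) -> list:
        # Return all time segments annotated with the queried gesture.
        return [(s, e) for s, e, g in self.annotations if g == gesture]

# Usage: index a (fictional) design-review video by two gestures.
vocab = GestureVocabulary()
vocab.define("span", "hands held apart, indicating a beam span")
vocab.define("rotate", "circular motion, indicating rotation of a part")

index = VideoIndex()
index.mark_up(12.0, 15.5, "span")
index.mark_up(40.2, 43.0, "rotate")
index.mark_up(61.0, 64.8, "span")

print(index.retrieve("span"))  # → [(12.0, 15.5), (61.0, 64.8)]
```

The point of the sketch is the access pattern: once gestures are formalized into a vocabulary and video segments are marked up against it, retrieval from an otherwise unstructured video archive reduces to a simple lookup by gesture name.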