Towards automated large vocabulary gesture search

  • Authors:
  • Alexandra Stefan; Haijing Wang; Vassilis Athitsos

  • Affiliations:
  • University of Texas at Arlington; University of Texas at Arlington; University of Texas at Arlington

  • Venue:
  • Proceedings of the 2nd International Conference on PErvasive Technologies Related to Assistive Environments
  • Year:
  • 2009

Abstract

This paper describes work towards designing a computer vision system that helps users look up the meaning of a sign. Sign lookup is treated as a video database retrieval problem: the system uses a video database containing one or more video examples of each sign, for a large number of signs (close to 1,000 in our current experiments). The emphasis of this paper is on evaluating the trade-offs between a non-automated approach, where the user manually specifies hand locations in the input video, and a fully automated approach, where hand locations are determined by a computer vision module, thus introducing inaccuracies into the sign retrieval process. We experimentally evaluate both approaches and present their respective advantages and disadvantages.
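The abstract does not specify how a query video is matched against the database. As an illustration only, the sketch below assumes each video has already been reduced to a per-frame hand-centroid trajectory (supplied either by the user's manual annotations or by a hand-detection module) and ranks database signs by dynamic time warping (DTW) distance, a common choice for trajectory matching; the function names, trajectory representation, and use of DTW are assumptions for this sketch, not details taken from the paper.

```python
import math

def dtw_distance(query, candidate):
    """Dynamic time warping distance between two 2-D hand trajectories,
    each given as a list of (x, y) hand-centroid positions per frame."""
    n, m = len(query), len(candidate)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(query[i - 1], candidate[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a query frame
                                 cost[i][j - 1],      # skip a database frame
                                 cost[i - 1][j - 1])  # match the two frames
    return cost[n][m]

def rank_signs(query_trajectory, sign_database):
    """Rank database signs by similarity to the query trajectory.
    `sign_database` maps a sign label to a list of example trajectories
    (one or more examples per sign, as in the paper's setup)."""
    scores = []
    for label, examples in sign_database.items():
        best = min(dtw_distance(query_trajectory, ex) for ex in examples)
        scores.append((best, label))
    scores.sort()
    return [label for _, label in scores]

# Hypothetical usage: the same ranking code runs whether the query
# trajectory came from manual hand annotations or from a hand detector;
# only the accuracy of the input trajectory differs between the two modes.
if __name__ == "__main__":
    database = {"HELLO": [[(0, 0), (1, 1), (2, 2)]],
                "THANKS": [[(0, 2), (1, 1), (2, 0)]]}
    query = [(0, 0), (1, 1), (2, 1)]
    print(rank_signs(query, database))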