Scalable Vision-based Gesture Interaction for Cluster-driven High Resolution Display Systems

  • Authors:
  • Xun Luo; Robert V. Kenyon

  • Affiliations:
  • Office of the Chief Scientist, R&D, Qualcomm Inc., e-mail: xun.luo@ieee.org; Electronic Visualization Laboratory, University of Illinois at Chicago, e-mail: kenyon@uic.edu

  • Venue:
  • VR '09 Proceedings of the 2009 IEEE Virtual Reality Conference
  • Year:
  • 2009


Abstract

We present a coordinated ensemble of scalable computing techniques that accelerates key tasks in vision-based gesture interaction by using the cluster that drives a large display system. We propose a hybrid strategy that partitions the scanning task of a frame image by both region and scale. Based on this hybrid strategy, we design a novel data structure called a scanning tree to organize the computing nodes. The effectiveness of the proposed solution was tested by incorporating it into a gesture interface controlling an ultra-high-resolution tiled display wall.
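The abstract's core idea, partitioning a frame's scanning workload by both spatial region and detector scale, and organizing the resulting tasks as a scanning tree over cluster nodes, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all names (`ScanTask`, `ScanNode`, `build_scanning_tree`), the 2x2 region grid, and the per-leaf scale grouping are assumptions for demonstration.

```python
# Hypothetical sketch of a "scanning tree": the root covers the whole
# frame, interior nodes partition it into spatial regions, and leaves
# further split the detector-window scales, so each leaf is a task
# assignable to one cluster node. Names and parameters are illustrative.

from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class ScanTask:
    region: Tuple[int, int, int, int]  # (x, y, width, height) in pixels
    scales: List[float]                # detector window scales to scan


@dataclass
class ScanNode:
    task: ScanTask
    children: List["ScanNode"] = field(default_factory=list)


def build_scanning_tree(frame_w, frame_h, scales,
                        grid=(2, 2), scales_per_leaf=2):
    """Build a two-level scanning tree: regions, then scale subsets."""
    root = ScanNode(ScanTask((0, 0, frame_w, frame_h), list(scales)))
    gx, gy = grid
    rw, rh = frame_w // gx, frame_h // gy
    for i in range(gx):
        for j in range(gy):
            region = (i * rw, j * rh, rw, rh)
            region_node = ScanNode(ScanTask(region, list(scales)))
            # Scale partitioning: each leaf scans the same region at
            # only a subset of the scales.
            for k in range(0, len(scales), scales_per_leaf):
                leaf = ScanNode(
                    ScanTask(region, list(scales[k:k + scales_per_leaf])))
                region_node.children.append(leaf)
            root.children.append(region_node)
    return root


def leaves(node):
    """Collect leaf tasks, i.e. the units dispatched to cluster nodes."""
    if not node.children:
        return [node]
    out = []
    for child in node.children:
        out.extend(leaves(child))
    return out
```

With a 2x2 region grid and four scales grouped two per leaf, a 1024x768 frame yields eight leaf tasks, each pairing one quadrant with two scales; in the paper's setting, each such task would be dispatched to a node of the cluster that drives the display wall.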