Speech/Gesture Interface to a Visual-Computing Environment

  • Authors:
  • Rajeev Sharma; Michael Zeller; Vladimir I. Pavlovic; Thomas S. Huang; Zion Lo; Stephen Chu; Yunxin Zhao; James C. Phillips; Klaus Schulten

  • Venue:
  • IEEE Computer Graphics and Applications
  • Year:
  • 2000

Abstract

Recent progress in 3D immersive display and virtual reality (VR) technologies has made many exciting applications possible. Fully exploiting this potential requires "natural" interfaces that let users manipulate such displays without cumbersome attachments. In this article we describe how visual hand-gesture analysis and speech recognition can be used to develop a speech/gesture interface for controlling a 3D display. The interface enhances an existing application, VMD, a VR visual-computing environment for structural biology. Free-hand gestures, together with a set of speech commands, manipulate the 3D graphical display. We found
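
The abstract's division of labor, with speech selecting the operation and the free-hand gesture supplying its continuous parameters, can be illustrated with a minimal sketch. The Python snippet below is a hypothetical illustration only, not the authors' implementation and not VMD code; the names GestureFrame, ViewState, and apply_command are assumptions introduced for this example.

```python
# Hypothetical sketch of speech/gesture fusion for a 3D display.
# All names here (GestureFrame, ViewState, apply_command) are illustrative
# assumptions, not part of VMD or the system described in the article.

from dataclasses import dataclass
import math


@dataclass
class GestureFrame:
    """One frame of hand-tracking output: normalized hand displacement."""
    dx: float  # horizontal displacement in [-1, 1]
    dy: float  # vertical displacement in [-1, 1]


@dataclass
class ViewState:
    """Minimal camera state for a 3D molecular display."""
    yaw: float = 0.0    # degrees
    pitch: float = 0.0  # degrees
    zoom: float = 1.0   # scale factor


def apply_command(view: ViewState, command: str, gesture: GestureFrame) -> ViewState:
    """Combine a spoken command with a hand gesture to update the view.

    The speech channel selects the operation; the gesture channel
    supplies the operation's continuous parameters.
    """
    if command == "rotate":
        view.yaw += 90.0 * gesture.dx
        view.pitch += 90.0 * gesture.dy
    elif command == "zoom":
        # Upward hand motion zooms in, downward motion zooms out.
        view.zoom *= math.exp(gesture.dy)
    elif command == "reset":
        view = ViewState()
    return view


if __name__ == "__main__":
    view = ViewState()
    # Simulated interaction: the user says "rotate" and sweeps the hand right.
    view = apply_command(view, "rotate", GestureFrame(dx=0.5, dy=0.0))
    print(view)  # ViewState(yaw=45.0, pitch=0.0, zoom=1.0)
```

The design point the sketch mirrors is that the discrete speech channel disambiguates intent ("rotate" versus "zoom"), while the gesture channel contributes the continuous magnitudes that would be tedious to specify by voice alone.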