VICs: a modular vision-based HCI framework

  • Authors:
  • Guangqi Ye, Jason Corso, Darius Burschka, Gregory D. Hager

  • Affiliations:
  • The Johns Hopkins University, Computational Interaction and Robotics Laboratory (all authors)

  • Venue:
  • ICVS '03: Proceedings of the 3rd International Conference on Computer Vision Systems
  • Year:
  • 2003

Abstract

Many Vision-Based Human-Computer Interaction (VBHCI) systems are based on the tracking of user actions. Examples include gaze-tracking, head-tracking, finger-tracking, and so forth. In this paper, we present a framework that employs no user-tracking; instead, all interface components continuously observe and react to changes within a local image neighborhood. More specifically, components expect a pre-defined sequence of visual events called Visual Interface Cues (VICs). VICs include color, texture, motion, and geometric elements, arranged to maximize the veridicality of the resulting interface element. A component is executed when this stream of cues has been satisfied. We present a general architecture for an interface system operating under the VIC-Based HCI paradigm, and then focus specifically on an appearance-based system in which a Hidden Markov Model (HMM) is employed to learn the gesture dynamics. Our implementation of the system successfully recognizes a button-push with a 96% success rate. The system operates at frame rate on standard PCs.
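
To make the paradigm concrete, below is a minimal sketch of a VIC-style component as a cue-sequence state machine over a fixed local image neighborhood. This is not the authors' implementation: the class name, the two example cues, and all thresholds are illustrative assumptions, and the paper's appearance-based variant replaces this kind of hand-coded cue ordering with an HMM trained on gesture dynamics.

```python
# A minimal sketch of a VIC-style interface component, assuming a frame
# source that yields RGB images as NumPy arrays. Class and cue names are
# hypothetical illustrations, not the paper's API.
import numpy as np


class VICComponent:
    """Watches one local image neighborhood and runs an action once a
    pre-defined, ordered sequence of visual cues has been satisfied."""

    def __init__(self, region, cues, on_trigger):
        self.region = region          # (x, y, w, h) in image coordinates
        self.cues = cues              # ordered list of predicates over a patch
        self.on_trigger = on_trigger  # action run when the cue stream completes
        self.state = 0                # index of the next cue to satisfy

    def observe(self, frame):
        x, y, w, h = self.region
        patch = frame[y:y + h, x:x + w]
        if self.cues[self.state](patch):
            self.state += 1
            if self.state == len(self.cues):
                self.on_trigger()
                self.state = 0        # re-arm after firing
        else:
            self.state = 0            # cue stream broken; start over


def make_motion_cue(threshold=10.0):
    """Coarse cue: mean absolute frame difference inside the patch."""
    memory = {"prev": None}

    def check(patch):
        prev, memory["prev"] = memory["prev"], patch.astype(np.float32)
        if prev is None:
            return False
        return float(np.abs(memory["prev"] - prev).mean()) > threshold

    return check


def skin_color_cue(patch):
    """Finer cue: crude skin-tone heuristic (illustration only)."""
    r, g, b = (patch[..., c].mean() for c in range(3))
    return r > g > b


# Hypothetical usage: a 32x32 "button" that fires after motion is observed
# in its neighborhood and a skin-colored region then covers it.
button = VICComponent(
    region=(40, 40, 32, 32),
    cues=[make_motion_cue(), skin_color_cue],
    on_trigger=lambda: print("button pushed"),
)
# In a capture loop: call button.observe(frame) on each new frame.
```

The coarse-to-fine ordering (cheap motion check first, more specific appearance check second) mirrors the paper's idea of arranging cues so that each component reacts only to its own neighborhood, with no global user tracking.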