VICs: A modular HCI framework using spatiotemporal dynamics

  • Authors:
  • Guangqi Ye; Jason J. Corso; Darius Burschka; Gregory D. Hager

  • Affiliations:
  • The Johns Hopkins University, Computational Interaction and Robotics Laboratory, 3400 N. Charles St., Baltimore, MD 21218, USA (all authors)

  • Venue:
  • Machine Vision and Applications
  • Year:
  • 2004

Abstract

Many vision-based human-computer interaction systems are based on the tracking of user actions; examples include gaze tracking, head tracking, and finger tracking. In this paper, we present a framework that employs no user tracking; instead, all interface components continuously observe and react to changes within a local neighborhood. More specifically, components expect a predefined sequence of visual events called visual interface cues (VICs). VICs include color, texture, motion, and geometric elements, arranged to maximize the veridicality of the resulting interface element. A component's action is executed once its stream of cues has been satisfied. We present a general architecture for an interface system operating under the VIC-based HCI paradigm and then focus on an appearance-based system in which a hidden Markov model (HMM) is employed to learn the gesture dynamics. Our implementation of the system recognizes a button push with a 96% success rate.
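The two mechanisms the abstract describes lend themselves to a short sketch. The Python below is not the authors' implementation: it illustrates (a) a VIC-style component that advances through an ordered list of cue detectors applied to a fixed local image patch, firing only when the whole cue stream is satisfied, and (b) a scaled forward-algorithm scorer for a discrete-output HMM, as one way gesture dynamics could be classified. All detectors, thresholds, and HMM parameters here are illustrative assumptions.

```python
import numpy as np


class VICComponent:
    """An interface element that watches one fixed local image neighborhood
    and fires its action only after a predefined, ordered sequence of
    visual cues (the VIC stream) has been observed."""

    def __init__(self, cue_detectors, on_triggered):
        self.cue_detectors = cue_detectors  # ordered patch -> bool tests
        self.on_triggered = on_triggered    # action run once the stream completes
        self.state = 0                      # index of the next expected cue

    def update(self, patch):
        """Process one frame's neighborhood; advance, fire, or reset."""
        if self.cue_detectors[self.state](patch):
            self.state += 1
            if self.state == len(self.cue_detectors):
                self.on_triggered()
                self.state = 0
        elif not self.cue_detectors[0](patch):
            self.state = 0  # even the first cue fails: the stream is broken


def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | model) for a discrete-output
    HMM with initial distribution pi, transitions A, emissions B."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        log_p += np.log(c)
        alpha = alpha / c
    return log_p


# Three toy cue detectors over a grayscale patch in [0, 1] (thresholds are
# arbitrary): presence of a bright region, texture as a stand-in for motion
# energy, and a simple left-brighter-than-right geometric test.
color_cue = lambda p: p.mean() > 0.6
motion_cue = lambda p: p.std() > 0.05
shape_cue = lambda p: p[:, : p.shape[1] // 2].mean() > p[:, p.shape[1] // 2:].mean()

button = VICComponent([color_cue, motion_cue, shape_cue],
                      on_triggered=lambda: print("button pushed"))

# A scripted patch sequence that satisfies the cues in order:
rng = np.random.default_rng(0)
empty = np.zeros((8, 8))
bright = np.full((8, 8), 0.8)
textured = bright + 0.3 * (rng.random((8, 8)) - 0.5)
pressed = textured.copy()
pressed[:, :4] += 0.2
for patch in (empty, bright, textured, pressed):
    button.update(patch)  # prints "button pushed" on the last frame

# Two illustrative 2-state HMMs over a 3-symbol alphabet: one for the
# dynamics of a press gesture, one for background activity. A quantized
# cue sequence is labeled by whichever model scores it higher.
pi = np.array([0.9, 0.1])
A_press = np.array([[0.7, 0.3], [0.1, 0.9]])
B_press = np.array([[0.8, 0.15, 0.05], [0.1, 0.2, 0.7]])
A_bg = np.array([[0.5, 0.5], [0.5, 0.5]])
B_bg = np.ones((2, 3)) / 3.0

obs = [0, 0, 1, 2, 2, 2]
label = ("press" if forward_log_likelihood(obs, pi, A_press, B_press)
         > forward_log_likelihood(obs, pi, A_bg, B_bg) else "background")
print(label)  # "press" for this sequence
```

In the paper's system the cues and the HMM observations come from real image features; here they are stubs chosen so the example runs deterministically and stays self-contained.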