A multimodal labeling interface for wearable computing

  • Authors:
  • Shanqing Li; Yunde Jia

  • Affiliations:
  • Beijing Institute of Technology, Beijing, China; Beijing Institute of Technology, Beijing, China

  • Venue:
  • Proceedings of the 15th International Conference on Intelligent User Interfaces
  • Year:
  • 2010

Abstract

In wearable computing environments, labeling an object with portable keyboards and mice is inconvenient. This paper presents a multimodal labeling interface that solves this problem with natural and efficient operations. The visual and audio modalities cooperate: an object is encircled by visually tracking a pointing gesture, while its name is simultaneously obtained through speech recognition. In this paper, we propose the concept of a virtual touchpad based on stereo vision techniques. With this touchpad, the object-encircling task is performed by drawing a closed curve on a transparent blackboard; the touch events and movements of the pointing gesture are robustly detected to support natural gesture interaction. The experimental results demonstrate the efficiency and usability of our multimodal interface.
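As a rough illustration only, and not the authors' implementation, the sketch below shows one way the two ideas in the abstract could fit together: a fingertip position triangulated by stereo vision is tested against a fixed virtual plane to detect touch events, the on-plane trace forms the encircling curve, and a name obtained from speech recognition is attached to the encircled region. The plane parameters, thresholds, and all function names here are assumptions.

```python
# Hypothetical sketch of the virtual-touchpad idea: a plane is fixed in the
# stereo camera's coordinate frame, and a "touch" is declared whenever the
# tracked fingertip comes within a small tolerance of that plane.
# Plane parameters, thresholds, and the fingertip source are all assumptions.
import numpy as np

# Virtual touchpad: plane with unit normal n and offset d (n . x + d = 0).
PLANE_N = np.array([0.0, 0.0, 1.0])   # assumed: plane faces the camera
PLANE_D = -0.5                        # assumed: 0.5 m in front of the camera
TOUCH_EPS = 0.01                      # assumed: 1 cm touch threshold

def signed_distance(p: np.ndarray) -> float:
    """Signed distance from a 3D fingertip point p to the virtual plane."""
    return float(PLANE_N @ p + PLANE_D)

def track_curve(fingertips: list[np.ndarray]) -> list[np.ndarray]:
    """Collect the trace drawn while the fingertip is 'touching' the plane.

    fingertips: per-frame 3D fingertip positions from stereo triangulation.
    Returns the points of one stroke, projected onto the plane.
    """
    stroke = []
    for p in fingertips:
        if abs(signed_distance(p)) < TOUCH_EPS:               # touch event
            stroke.append(p - signed_distance(p) * PLANE_N)   # project to plane
    return stroke

def is_closed(stroke: list[np.ndarray], gap: float = 0.03) -> bool:
    """A stroke counts as a closed (encircling) curve if its endpoints meet."""
    return len(stroke) > 2 and np.linalg.norm(stroke[0] - stroke[-1]) < gap

def label_object(stroke: list[np.ndarray], spoken_name: str):
    """Fuse the two modalities: the encircled region gets the spoken name."""
    if is_closed(stroke):
        centroid = np.mean(stroke, axis=0)
        return {"name": spoken_name, "region_center": centroid.tolist()}
    return None

# Toy usage: a square drawn on the plane, labeled by a recognized word.
square = [np.array([x, y, 0.5]) for x, y in
          [(0, 0), (0.1, 0), (0.1, 0.1), (0, 0.1), (0, 0)]]
print(label_object(track_curve(square), spoken_name="cup"))
```

The design point the abstract implies is the depth test: because the touchpad is virtual, a "touch" can only be defined as the fingertip falling within some tolerance of the plane, which is why robust stereo tracking of the pointing gesture matters.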