Vision-Based Multimodal Human Computer Interface Based on Parallel Tracking of Eye and Hand Motion

  • Authors:
  • Gihan Shin; Junchul Chun

  • Affiliations:
  • -;-

  • Venue:
  • ICCIT '07 Proceedings of the 2007 International Conference on Convergence Information Technology
  • Year:
  • 2007


Abstract

This paper presents a vision-based multimodal human computer interface system using eye and hand motion tracking. Conventional vision-based human computer interfaces track eye or hand motion individually; in contrast, the proposed vision-based virtual interface integrates the tracking of eye blinking and hand gestures with their recognition into a single virtual interface. The proposed multimodal interface system provides a vision-based mechanism for communication between human and computer rather than relying on conventional interface devices. For motion tracking and recognition of eye and hand gestures, we exploit an optical flow method and template matching. To minimize errors in detecting and tracking the relevant human features caused by lighting variation, each frame is enhanced by histogram equalization and min-max normalization. For eye and hand region detection we use the HT skin color model, a nonparametric model that is robust to lighting variation. While the positions of the hand and eyes are tracked with the optical flow method, predefined hand gestures and eye blinks are recognized by template matching. In the experiments, we apply the developed interface to control the motion of 3D models rendered in an OpenGL environment. The experiments show that the proposed interface can effectively substitute for existing interface devices such as a mouse.
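
The processing pipeline the abstract describes (frame enhancement by histogram equalization and min-max normalization, skin-color region detection, optical flow tracking, and template matching for gesture and blink recognition) can be sketched with OpenCV. The sketch below is not the authors' implementation: the HSV skin bounds stand in for the paper's HT skin color model, Lucas-Kanade is used as a representative optical flow method, and the template file name, matching threshold, and feature-tracking parameters are illustrative assumptions.

```python
# Minimal sketch of the pipeline, assuming OpenCV and a webcam; all thresholds
# and the template image are placeholder assumptions, not the paper's values.
import cv2

def enhance(gray):
    """Histogram equalization followed by min-max normalization to [0, 255]."""
    eq = cv2.equalizeHist(gray)
    return cv2.normalize(eq, None, 0, 255, cv2.NORM_MINMAX)

def skin_mask(bgr):
    """Rough skin segmentation in HSV space; the bounds are placeholder
    values standing in for the paper's HT skin color model."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))

def match_gesture(gray, template, threshold=0.8):
    """Normalized cross-correlation template matching; reports a match when
    the best score exceeds the (assumed) threshold."""
    scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, best, _, loc = cv2.minMaxLoc(scores)
    return best >= threshold, loc

cap = cv2.VideoCapture(0)
template = cv2.imread("blink_template.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
ok, frame = cap.read()
prev_gray = enhance(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
# Seed Lucas-Kanade feature points inside the skin-colored regions only.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.01,
                              minDistance=7, mask=skin_mask(frame))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = enhance(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    if pts is not None and len(pts) > 0:
        # Pyramidal Lucas-Kanade optical flow between consecutive frames.
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        pts = nxt[status.flatten() == 1].reshape(-1, 1, 2)
    if template is not None:
        hit, where = match_gesture(gray, template)
        if hit:
            print("gesture/blink recognized at", where)  # e.g. map to a mouse event
    prev_gray = gray

cap.release()
```

In a full system, a recognized gesture or blink would be translated into pointer movement or click events, which is the mouse-substitution role the experiments evaluate.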