LipMouse: novel multimodal human-computer interaction interface

  • Authors:
  • Piotr Dalka; Andrzej Czyzewski

  • Affiliations:
  • Gdansk University of Technology; Gdansk University of Technology

  • Venue:
  • SIGGRAPH '09: Posters
  • Year:
  • 2009

Abstract

The main goal of any HCI application is to make working with a computer as natural, intuitive and effective as possible. One of the main application areas of new human-computer interfaces is enabling people with permanent or temporary motor disabilities to use computers efficiently. There are two main types of such solutions [Aggarwal and Cai 1999]. The first group utilizes devices mounted directly on the user's body. Applications in the second group are contactless and rely on remote sensors only, which makes them much more comfortable for the user. Among contactless solutions, vision-based human-computer interfaces are the most promising: they employ cameras and image processing algorithms to detect signs and gestures made by the user and to execute configured actions. The most common vision-based applications employ eye and hand tracking [Shin and Chun 2007].
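To make the idea of a vision-based, contactless pointer interface concrete, below is a minimal illustrative sketch, not the LipMouse algorithm itself: it uses an OpenCV Haar cascade to localize the user's face in webcam frames and maps the frame-to-frame displacement of the face centre to relative cursor motion via pyautogui. The detector choice, the gain constant and the pyautogui mapping are assumptions made for illustration only.

```python
# Illustrative sketch of a vision-based pointer loop (not the authors' method):
# detect the face in each webcam frame and translate its motion into cursor motion.
import cv2
import pyautogui

# Haar cascade face detector bundled with OpenCV (assumed detector choice).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)   # default webcam
prev_center = None
GAIN = 2.0                  # cursor pixels per pixel of face motion (arbitrary)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = faces[0]                  # take the first detection
        center = (x + w // 2, y + h // 2)
        if prev_center is not None:
            dx = (center[0] - prev_center[0]) * GAIN
            dy = (center[1] - prev_center[1]) * GAIN
            pyautogui.moveRel(dx, dy)          # move cursor by the scaled offset
        prev_center = center
    cv2.imshow("preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```

A practical system would replace the raw face-centre displacement with tracking of a specific facial region (in LipMouse, the lips) and add smoothing and gesture-to-action mapping; the loop structure, however, stays the same: capture, detect, map, act.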