Explicit task representation based on gesture interaction

  • Authors:
  • Christian Müller-Tomfelde; Cécile Paris

  • Affiliations:
  • CSIRO - Information and Communication Technologies Centre, North Ryde, NSW, Australia (both authors)

  • Venue:
  • MMUI '05 Proceedings of the 2005 NICTA-HCSNet Multimodal User Interaction Workshop - Volume 57
  • Year:
  • 2006

Abstract

This paper describes the role and use of an explicit task representation in applications where humans interact in non-traditional computer environments using gestures. The focus lies on training and assistance applications in which the training objective includes implicit knowledge, e.g., motor skills. On the one hand, these applications require a clear and transparent description of what has to be done during the interaction; on the other hand, they are highly interactive and multimodal. The human-computer interaction is therefore modelled top-down as a collaboration in which each participant pursues an individual goal stipulated by a task. In a bottom-up process, gesture recognition determines the user's actions by processing the continuous data streams from the environment. The resulting gesture or action is interpreted as the user's intention and evaluated within the collaboration, allowing the system to reason about how best to provide guidance at that point. A vertical prototype combining a haptic virtual environment with a knowledge-based reasoning system is discussed, and the evolution of the task-based collaboration is demonstrated.
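
The abstract outlines a pipeline in which gestures recognised bottom-up from continuous data streams are evaluated against an explicit, top-down task representation so the system can offer guidance. A minimal sketch of that idea is given below; the class names, the threshold-based recogniser, and the example task steps are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the task steps, gesture labels, and the
# crude threshold-based recogniser are assumptions, not the system
# described in the paper.
from dataclasses import dataclass, field


@dataclass
class TaskStep:
    """One step of the explicit task representation."""
    name: str
    expected_gesture: str
    guidance: str
    done: bool = False


def recognise_gesture(stream: list[tuple[float, float, float]]) -> str:
    """Stand-in for gesture recognition on a continuous data stream
    (e.g. 3D positions sampled from a haptic device)."""
    dx = stream[-1][0] - stream[0][0]
    dz = stream[-1][2] - stream[0][2]
    if dz < -0.05:
        return "press_down"
    if abs(dx) > 0.1:
        return "sweep"
    return "idle"


@dataclass
class Collaboration:
    """Evaluates recognised gestures (user intentions) against the task."""
    steps: list[TaskStep] = field(default_factory=list)

    def evaluate(self, gesture: str) -> str:
        current = next((s for s in self.steps if not s.done), None)
        if current is None:
            return "Task complete."
        if gesture == current.expected_gesture:
            current.done = True
            return f"Step '{current.name}' accomplished."
        # Intention does not match the current task step: provide guidance.
        return f"Guidance: {current.guidance}"


if __name__ == "__main__":
    task = Collaboration(steps=[
        TaskStep("approach", "sweep", "Move the tool sideways towards the target."),
        TaskStep("apply pressure", "press_down", "Press the tool gently downwards."),
    ])
    stream = [(0.0, 0.0, 0.0), (0.06, 0.0, 0.0), (0.15, 0.0, 0.01)]
    print(task.evaluate(recognise_gesture(stream)))   # first step matched
    stream = [(0.0, 0.0, 0.0), (0.0, 0.0, -0.02), (0.0, 0.0, -0.08)]
    print(task.evaluate(recognise_gesture(stream)))   # second step matched
```

In the paper's setting the recogniser would operate on haptic virtual-environment data and the evaluation would be handled by a knowledge-based reasoning system; the sketch only shows how an explicit task model lets the system compare a recognised intention against the expected action and fall back to guidance on a mismatch.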