Building a Task Language for Segmentation and Recognition of User Input to Cooperative Manipulation Systems

  • Authors:
  • C. Sean Hundtofte, Gregory D. Hager, Allison M. Okamura

  • Venue:
  • HAPTICS '02 Proceedings of the 10th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems
  • Year:
  • 2002

Abstract

We present the results of using Hidden Markov Models (HMMs) for automatic segmentation and recognition of user motions. Previous work on recognition of user intent with human-machine interfaces has used task-level HMMs with a single hidden state for each sub-task. In contrast, many speech recognition systems employ HMMs at the phoneme level and use a network of HMMs to model words. We analogously use multi-state, continuous HMMs to model action at the "gesteme" level, and a network of HMMs to describe a task or activity. As a result, we are able to create a "task language" that is used to model and segment two different tasks performed with a human-machine cooperative manipulation system. Tests were performed using force and position data recorded from an instrument held simultaneously by a robot and a human operator. Experimental results show a recognition accuracy exceeding 85%. The resulting information could be used for intelligent command of virtual and teleoperated environments, and for implementing contextually appropriate virtual fixtures that dynamically assist the operator in executing complex tasks.
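The core idea in the abstract, modeling each gesteme with a small multi-state continuous HMM and labeling motion data by likelihood, can be sketched as follows. This is a minimal illustration using hmmlearn's GaussianHMM rather than the authors' implementation; the state count, diagonal covariances, and isolated-segment classification are assumptions, and the paper's full system decodes over a network of gesteme HMMs (the task language) rather than scoring segments independently.

```python
# Illustrative sketch of gesteme-level recognition with multi-state
# continuous HMMs. Uses hmmlearn's GaussianHMM; all specifics below
# (state count, covariance type, helper names) are assumptions, not
# details from the paper.
import numpy as np
from hmmlearn import hmm

N_STATES = 3  # hidden states per gesteme model (assumed)

def train_gesteme_models(training_data):
    """Fit one continuous HMM per gesteme.

    training_data maps a gesteme label to a list of observation
    sequences, each an (n_samples, n_features) array of force and
    position readings from the cooperative manipulator.
    """
    models = {}
    for label, sequences in training_data.items():
        # hmmlearn takes all sequences concatenated, plus their lengths.
        X = np.concatenate(sequences)
        lengths = [len(seq) for seq in sequences]
        model = hmm.GaussianHMM(n_components=N_STATES, covariance_type="diag")
        model.fit(X, lengths)
        models[label] = model
    return models

def classify_segment(models, segment):
    """Label a candidate segment with the gesteme whose HMM assigns it
    the highest log-likelihood (a stand-in for decoding over the full
    network of gesteme HMMs)."""
    return max(models, key=lambda label: models[label].score(segment))
```

In this sketch each gesteme model is trained independently and a segment is assigned to the best-scoring model; the task-language idea in the paper goes further by chaining gesteme HMMs into a network, so that segmentation and recognition emerge jointly from decoding a whole trial against the grammar of allowed gesteme sequences.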