Multimodal segmentation of object manipulation sequences with product models

  • Authors:
  • Alexandra Barchunova; Robert Haschke; Mathias Franzius; Helge Ritter

  • Affiliations:
  • Research Institute of Cognition and Robotics, Bielefeld University, Bielefeld, Germany; Neuroinformatics, Bielefeld University, Bielefeld, Germany; Honda Research Institute Europe, Offenbach/Main, Germany; Neuroinformatics, Bielefeld University, Bielefeld, Germany

  • Venue:
  • ICMI '11: Proceedings of the 13th International Conference on Multimodal Interfaces
  • Year:
  • 2011

Abstract

In this paper we propose an approach for the unsupervised segmentation of continuous object manipulation sequences into semantically distinct subsequences. The proposed method estimates segment borders from an integrated consideration of three modalities (tactile feedback, hand posture, audio), yielding robust and accurate results in a single pass. To this end, a Bayesian approach, originally applied by Fearnhead to segment one-dimensional time series data, is extended to allow an integrated segmentation of multimodal sequences. We propose a joint product model that combines modality-specific likelihoods to model segments; weight parameters control the influence of each modality within the joint model. We discuss the relevance of each modality based on an evaluation of the temporal and structural correctness of segmentation results obtained with various weight combinations.
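The weighted product model described in the abstract can be sketched in a few lines: raising each modality-specific likelihood to a weight and multiplying corresponds, in log space, to a weighted sum of log-likelihoods. The sketch below is illustrative only; the function name, the example values, and the equal weights are assumptions, not the authors' implementation.

```python
def joint_segment_loglik(modality_logliks, weights):
    """Weighted product model in log space.

    The product  p(segment) proportional to  prod_m p_m(segment)^{w_m}
    becomes the weighted sum  sum_m w_m * log p_m(segment).
    """
    return sum(w * ll for ll, w in zip(modality_logliks, weights))

# Hypothetical per-modality segment log-likelihoods
# (tactile, hand posture, audio) combined with equal weights:
score = joint_segment_loglik([-12.3, -8.1, -20.5], [1.0, 1.0, 1.0])
```

Setting a weight to zero removes that modality from the joint model, which is how the influence of individual modalities can be evaluated.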