Designing effective multimodal behaviors for robots: a data-driven perspective

  • Authors:
  • Chien-Ming Huang

  • Affiliations:
  • University of Wisconsin-Madison, Madison, WI, USA

  • Venue:
  • Proceedings of the 15th ACM International Conference on Multimodal Interaction
  • Year:
  • 2013

Abstract

Robots need to use multimodal behaviors, including speech, gaze, and gestures, effectively in order to support their users in achieving intended interaction goals, such as improved task performance. This proposed research concerns using a data-driven approach to design effective multimodal behaviors for robots that interact with humans. In particular, probabilistic graphical models (PGMs) are used to model the interdependencies among multiple behavioral channels and to generate complexly contingent multimodal behaviors for robots, thereby facilitating human-robot interaction. This data-driven approach not only allows the investigation of hidden and temporal relationships among behavioral channels but also provides a holistic perspective on how multimodal behaviors as a whole might shape interaction outcomes. Three studies are proposed to evaluate this data-driven approach and to investigate the dynamics of multimodal behaviors and interpersonal interaction. This research will contribute to the multimodal interaction community in theoretical, methodological, and practical respects.
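
To make the core idea concrete, the following is a minimal, hypothetical sketch of how a small discrete probabilistic graphical model might couple a robot's speech, gesture, and gaze channels so that behavior in one channel is generated contingent on the others. The variable names, states, and probabilities below are invented for illustration only; they are not the model, structure, or data from the paper.

```python
import random

# Illustrative (invented) conditional probability tables.
# Structure: speech_act -> gesture, and gaze depends on both
# speech_act and gesture, capturing cross-channel contingency.

# P(gesture | speech_act)
GESTURE_CPD = {
    "refer_to_object":  {"point": 0.7, "none": 0.2, "beat": 0.1},
    "give_instruction": {"beat": 0.5, "point": 0.2, "none": 0.3},
    "acknowledge":      {"none": 0.8, "beat": 0.2},
}

# P(gaze | speech_act, gesture)
GAZE_CPD = {
    ("refer_to_object", "point"):  {"object": 0.8, "listener": 0.2},
    ("refer_to_object", "none"):   {"object": 0.5, "listener": 0.5},
    ("refer_to_object", "beat"):   {"listener": 0.6, "object": 0.4},
    ("give_instruction", "beat"):  {"listener": 0.7, "object": 0.3},
    ("give_instruction", "point"): {"object": 0.6, "listener": 0.4},
    ("give_instruction", "none"):  {"listener": 0.8, "object": 0.2},
    ("acknowledge", "none"):       {"listener": 0.9, "object": 0.1},
    ("acknowledge", "beat"):       {"listener": 0.9, "object": 0.1},
}

def sample(dist):
    """Draw one outcome from a {value: probability} table."""
    r, acc = random.random(), 0.0
    for value, p in dist.items():
        acc += p
        if r <= acc:
            return value
    return value  # guard against floating-point slack

def generate_behavior(speech_act):
    """Jointly sample gesture and gaze conditioned on the speech act."""
    gesture = sample(GESTURE_CPD[speech_act])
    gaze = sample(GAZE_CPD[(speech_act, gesture)])
    return {"speech_act": speech_act, "gesture": gesture, "gaze": gaze}

if __name__ == "__main__":
    for act in ["refer_to_object", "give_instruction", "acknowledge"]:
        print(generate_behavior(act))
```

Extending such a model with temporal edges (for example, as a dynamic Bayesian network over successive time steps) would be one way to represent the hidden and temporal relationships among behavioral channels that the proposed approach aims to investigate.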