Robots need to use multimodal behaviors, including speech, gaze, and gesture, effectively to help their users achieve intended interaction goals, such as improved task performance. This proposed research concerns the design of effective multimodal behaviors for robots interacting with humans using a data-driven approach. In particular, probabilistic graphical models (PGMs) are used to model the interdependencies among multiple behavioral channels and to generate contingent multimodal behaviors that facilitate human-robot interaction. This data-driven approach not only allows the investigation of hidden and temporal relationships among behavioral channels but also provides a holistic perspective on how multimodal behaviors as a whole might shape interaction outcomes. Three studies are proposed to evaluate the data-driven approach and to investigate the dynamics of multimodal behavior and interpersonal interaction. This research will contribute to the multimodal interaction community in theoretical, methodological, and practical respects.
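To make the core idea concrete, the following is a minimal, purely illustrative sketch of the kind of probabilistic graphical model described above: a tiny dynamic Bayesian network in which a hidden "communicative intent" state evolves over time and jointly generates two behavioral channels (gaze and gesture), so that the sampled behaviors are contingent on a shared latent state. All state names, channel values, and probabilities here are hypothetical assumptions for illustration, not parameters from the proposed research.

```python
import random

# Hidden "communicative intent" states (illustrative labels).
STATES = ["inform", "engage"]

# Transition model P(state_t | state_{t-1}); rows sum to 1.
TRANS = {
    "inform": {"inform": 0.7, "engage": 0.3},
    "engage": {"inform": 0.4, "engage": 0.6},
}

# Per-channel emission models conditioned on the hidden state.
# Conditioning both channels on the same state couples them,
# which is what makes the generated behaviors "contingent".
GAZE = {
    "inform": {"at_task": 0.8, "at_partner": 0.2},
    "engage": {"at_task": 0.3, "at_partner": 0.7},
}
GESTURE = {
    "inform": {"deictic": 0.6, "beat": 0.4},
    "engage": {"deictic": 0.2, "beat": 0.8},
}

def sample(dist, rng):
    """Draw one outcome from a discrete distribution {value: prob}."""
    r, acc = rng.random(), 0.0
    for value, p in dist.items():
        acc += p
        if r < acc:
            return value
    return value  # numerical safety net for rounding

def generate_behaviors(steps, seed=0):
    """Sample a multimodal behavior sequence from the toy DBN."""
    rng = random.Random(seed)
    state, trace = "inform", []
    for _ in range(steps):
        state = sample(TRANS[state], rng)       # latent dynamics
        trace.append({
            "state": state,
            "gaze": sample(GAZE[state], rng),    # channel 1
            "gesture": sample(GESTURE[state], rng),  # channel 2
        })
    return trace

if __name__ == "__main__":
    for step in generate_behaviors(5):
        print(step)
```

In a full system, the transition and emission tables would be learned from annotated human interaction data rather than hand-specified, and the model could be extended with more channels (e.g., speech acts) or richer temporal structure; this sketch only shows how a shared latent state lets one model generate coordinated behavior across channels.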