Planning interactive task for intelligent characters

  • Authors:
  • Dan Zong, Chunpeng Li, Shihong Xia, Zhaoqi Wang

  • Affiliations:
  • Advanced Computing Research Laboratory, Institute of Computing Technology, Chinese Academy of Sciences, No. 6 Kexueyuan South Road, Zhongguancun, Haidian District, Beijing, China (all authors)

  • Venue:
  • Computer Animation and Virtual Worlds
  • Year:
  • 2012

Abstract

Motion planning is an important problem in character animation and interactive simulation. However, few planning methods consider the domain-specific knowledge that governs an agent's behaviors, and none of them can plan interactive tasks in which the agent manipulates objects in the virtual environment. This paper presents a novel Q-learning-based method for planning interactive tasks for intelligent characters. The approach is a three-phase framework: a data-preprocessing phase, a controller-learning phase, and a motion-synthesis phase. In the data-preprocessing phase, motion clips are abstracted into high-level behaviors, and an interactive behavior graph (IBG) is constructed to define the agent's interactive capabilities in terms of interactive features. In the controller-learning phase, the Q-learning algorithm uses the IBG to train a control policy over the discrete domain of interactive features. In the motion-synthesis phase, optimal motion sequences are generated by following the learned policy to accomplish the interactive task. Experimental results demonstrate that this unified framework generates reasonable and realistic motion sequences for planning interactive tasks in complex environments. Copyright © 2012 John Wiley & Sons, Ltd.
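
As a rough illustration of the controller-learning phase, the sketch below shows tabular Q-learning with epsilon-greedy exploration over a discrete state space. It is a minimal example, not the authors' implementation: the environment (a 10-state chain), the reward values, and the helper chain_step are placeholders standing in for the interactive features and behavior-graph actions described in the abstract.

    # Hedged sketch: tabular Q-learning with epsilon-greedy exploration.
    # The environment below (a 10-state chain) is a placeholder, not the
    # paper's interactive-feature space or interactive behavior graph (IBG).
    import random
    from collections import defaultdict

    def q_learning(actions, step, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
        """Learn Q(s, a); 'step(s, a)' must return (next_state, reward, done)."""
        Q = defaultdict(float)
        for _ in range(episodes):
            s, done = 0, False
            while not done:
                # epsilon-greedy action selection
                if random.random() < epsilon:
                    a = random.choice(actions)
                else:
                    a = max(actions, key=lambda a_: Q[(s, a_)])
                s_next, r, done = step(s, a)
                # standard one-step Q-learning update
                target = r + gamma * max(Q[(s_next, a_)] for a_ in actions)
                Q[(s, a)] += alpha * (target - Q[(s, a)])
                s = s_next
        return Q

    # Toy usage: action 1 moves right, action 0 moves left; state 9 is the goal.
    def chain_step(s, a):
        s_next = min(max(s + (1 if a == 1 else -1), 0), 9)
        return s_next, (1.0 if s_next == 9 else -0.01), s_next == 9

    Q = q_learning(actions=[0, 1], step=chain_step)
    policy = {s: max([0, 1], key=lambda a: Q[(s, a)]) for s in range(10)}

In this reading, extracting the greedy policy from the learned Q-table plays the role that following the trained control policy plays in the paper's motion-synthesis phase, where it selects the behavior sequence that accomplishes the interactive task.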