Motion planning is an important problem in character animation and interactive simulation. However, few planning methods incorporate the domain-specific knowledge that governs an agent's behaviors, and none can plan interactive tasks in which the agent manipulates objects in the virtual environment. This paper presents a novel method, based on Q-learning, for planning interactive tasks for intelligent characters. The approach is a three-phase framework: a data-preprocessing phase, a controller-learning phase, and a motion-synthesis phase. In the data-preprocessing phase, motion clips are abstracted into high-level behaviors, and an interactive behavior graph (IBG) is constructed to define the agent's interactive capabilities in terms of interactive features. In the controller-learning phase, the Q-learning algorithm uses the IBG to train a control policy in the discrete domain of interactive features. In the motion-synthesis phase, optimal motion sequences that accomplish the interactive task are generated by following the learned policy. Experimental results demonstrate that this uniform framework generates reasonable and realistic motion sequences for planning interactive tasks in complex environments. Copyright © 2012 John Wiley & Sons, Ltd.
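The controller-learning phase rests on standard tabular Q-learning over a discrete state space. The abstract does not specify the paper's interactive features, behavior graph, or reward design, so the following is only a minimal sketch of that standard algorithm: a generic epsilon-greedy Q-learning loop, exercised on a hypothetical 5-state chain environment standing in for the discretized interactive features.

```python
import random

def q_learning(states, actions, step, *,
               episodes=500, max_steps=50,
               alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning over a discrete state/action space.

    `step(s, a)` returns (next_state, reward, done). The states and
    actions here are placeholders; the paper's IBG-derived behaviors
    and interactive features are not specified in the abstract.
    """
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = rng.choice(states)
        for _ in range(max_steps):
            # Epsilon-greedy action selection over the discrete actions.
            if rng.random() < epsilon:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda x: Q[(s, x)])
            s2, r, done = step(s, a)
            # Standard Q-learning backup toward the greedy successor value.
            target = r + (0.0 if done else gamma * max(Q[(s2, x)] for x in actions))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            if done:
                break
            s = s2
    return Q

# Hypothetical stand-in environment: a 5-state chain where state 4 is
# the "goal" (purely illustrative; not the paper's environment).
GOAL = 4

def step(s, a):
    s2 = max(0, min(GOAL, s + a))
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

Q = q_learning(list(range(5)), [-1, +1], step)
# Greedy policy extracted from the learned Q-table.
policy = {s: max([-1, +1], key=lambda a: Q[(s, a)]) for s in range(4)}
```

Following the greedy policy from any start state then yields the shortest action sequence to the goal, mirroring how the paper's motion-synthesis phase follows the learned policy to assemble an optimal motion sequence.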