We present a novel approach to real-time character animation that allows a character to move autonomously based on vision input. By letting the character "see" its environment directly through depth perception, we can skip the manual design phase of parameterizing the state space in a reinforcement learning framework. In previous work, this parameterization was done by hand, since finding a minimal set of parameters to describe a character's environment is crucial for efficient learning. Learning from raw vision input, however, suffers from the "curse of dimensionality", which we avoid by introducing a hierarchical state model and a novel regression algorithm. We demonstrate that our controllers allow a character to navigate and survive in environments containing arbitrarily shaped obstacles, which is hard to achieve with existing reinforcement learning frameworks.
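The two ingredients named in the abstract — a hierarchical state built from raw depth input, and regression-based reinforcement learning — can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the coarse-to-fine bin counts, the function names, and the linear per-action regressor (a stand-in for the paper's "novel regression algorithm", in the spirit of batch fitted Q-iteration) are all assumptions made for illustration.

```python
import numpy as np

def hierarchical_state(depth_scan, levels=(4, 16)):
    """Coarse-to-fine summary of a 1-D depth scan (hypothetical).
    Each level averages the scan into a fixed number of bins, so a
    raw 64-D reading becomes a 4 + 16 = 20-D state vector that still
    preserves the rough layout of nearby obstacles."""
    feats = []
    for bins in levels:
        for chunk in np.array_split(np.asarray(depth_scan), bins):
            feats.append(chunk.mean())
    return np.array(feats)

def fitted_q_step(states, actions, rewards, next_states, weights, gamma=0.9):
    """One sweep of batch regression-based Q-learning with a linear
    approximator per action (weights: dict action -> coefficient vector).
    A stand-in for the learning step; the real method is more involved."""
    new_weights = {}
    for a in weights:
        idx = actions == a
        if not idx.any():
            new_weights[a] = weights[a]  # no samples for this action
            continue
        # Bellman targets computed from the current approximation.
        q_next = np.max(
            [next_states[idx] @ weights[b] for b in weights], axis=0)
        targets = rewards[idx] + gamma * q_next
        # Refit this action's regressor to the targets.
        new_weights[a], *_ = np.linalg.lstsq(states[idx], targets, rcond=None)
    return new_weights
```

The point of the hierarchy is that the learner never sees the raw high-dimensional depth image, only a fixed low-dimensional summary, which is what keeps the regression tractable.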