Integrated learning for interactive synthetic characters

  • Authors:
  • Bruce Blumberg; Marc Downie; Yuri Ivanov; Matt Berlin; Michael Patrick Johnson; Bill Tomlinson

  • Affiliations:
  • The Media Lab, MIT; The Media Lab, MIT; The Media Lab, MIT; The Media Lab, MIT; The Media Lab, MIT; The Media Lab, MIT

  • Venue:
  • Proceedings of the 29th annual conference on Computer graphics and interactive techniques
  • Year:
  • 2002

Abstract

The ability to learn is a potentially compelling and important quality for interactive synthetic characters. To that end, we describe a practical approach to real-time learning for synthetic characters. Our implementation is grounded in the techniques of reinforcement learning and informed by insights from animal training. It simplifies the learning task for characters by (a) enabling them to take advantage of predictable regularities in their world, (b) allowing them to make maximal use of any supervisory signals, and (c) making them easy for humans to train.

We built an autonomous animated dog that can be trained with "clicker training", a technique used to train real dogs. Capabilities demonstrated include being trained to recognize and use acoustic patterns as cues for actions, as well as to synthesize new actions from novel paths through its motion space.

A key contribution of this paper is to demonstrate that by addressing the three problems of state, action, and state-action space discovery at the same time, the solution to each becomes easier. Finally, we articulate heuristics and design principles that make learning practical for synthetic characters.
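
To make the clicker-training idea concrete, the sketch below shows a generic tabular reinforcement-learning update in which the trainer's "click" is treated as an immediate reward that strengthens the association between a perceived cue and the action the character just performed. This is not the paper's implementation; the class, parameter names, and cue labels are hypothetical, and the actual system additionally discovers states, actions, and their associations rather than assuming a fixed table.

```python
import random
from collections import defaultdict

# Minimal clicker-training-style sketch (hypothetical names, not the paper's code):
# a "click" acts as reward 1.0 that strengthens the cue -> action association.
class ClickerLearner:
    def __init__(self, actions, learning_rate=0.2, exploration=0.1):
        self.actions = actions
        self.alpha = learning_rate
        self.epsilon = exploration
        # value[(cue, action)] ~ learned strength of doing `action` on `cue`
        self.value = defaultdict(float)

    def choose(self, cue):
        # Mostly exploit the best-known action for this cue, sometimes explore.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.value[(cue, a)])

    def reinforce(self, cue, action, clicked):
        # A click delivers reward 1.0; no click delivers 0.0.
        reward = 1.0 if clicked else 0.0
        old = self.value[(cue, action)]
        self.value[(cue, action)] = old + self.alpha * (reward - old)

# Usage: reinforce "sit" whenever it follows the (hypothetical) acoustic cue "sit-sound".
learner = ClickerLearner(actions=["sit", "beg", "lie-down"])
for _ in range(50):
    action = learner.choose("sit-sound")
    learner.reinforce("sit-sound", action, clicked=(action == "sit"))
print(max(learner.actions, key=lambda a: learner.value[("sit-sound", a)]))  # -> "sit"
```

In the abstract's terms, the click plays the role of a well-timed supervisory signal (point b above), while the cue-action table stands in for the state-action space that the full system must itself discover.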