Planning and Moving in Dynamic Environments

  • Authors:
  • Sethu Vijayakumar; Marc Toussaint; Giorgios Petkos; Matthew Howard

  • Affiliations:
  • School of Informatics, University of Edinburgh, Edinburgh, UK EH8 9AB; Technical University of Berlin, Berlin, Germany 10587; School of Informatics, University of Edinburgh, Edinburgh, UK EH8 9AB; School of Informatics, University of Edinburgh, Edinburgh, UK EH8 9AB

  • Venue:
  • Creating Brain-Like Intelligence
  • Year:
  • 2009

Abstract

In this chapter, we develop a new view of problems of movement control and planning from a Machine Learning perspective. In this view, decision making, control, and planning are all treated as an inference or (alternatively) an information-processing problem, i.e., the problem of computing a posterior distribution over unknown variables conditioned on the available information (targets, goals, constraints). Further, problems of adaptation and learning are formulated as statistical learning problems of modelling the dependencies between variables. This approach extends naturally to cases where information is missing, e.g., when the context or load needs to be inferred from interaction, or to apprenticeship learning where, crucially, latent properties of the observed behavior are learnt rather than the motion being copied directly. With this account, we hope to address the long-standing problem of designing adaptive control and planning systems that can be flexibly coupled to multiple sources of information (be they of a purely sensory nature or higher-level modulations such as task and constraint information) and can equally be formulated at any level of abstraction (motor control variables or symbolic representations). Recent advances in Machine Learning provide a coherent framework for these problems.
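
As a concrete illustration of the abstract's point about inferring missing information from interaction, the following minimal Python sketch maintains a posterior over a discrete latent context (e.g., which load is attached to a manipulator) and updates it with Bayes' rule as noisy torque observations arrive. The toy dynamics models, the set of contexts, and the noise level are illustrative assumptions, not the chapter's actual models or implementation.

```python
# Minimal sketch (assumed, not the chapter's implementation): infer a latent
# "context" -- e.g. which of several known loads is attached -- from noisy
# torque observations by maintaining a posterior over contexts.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-context dynamics: torque = effective_inertia * acceleration.
contexts = {0: 1.0, 1: 2.5, 2: 4.0}   # hypothetical effective inertias
true_context = 1                       # ground truth used only to simulate data
noise_std = 0.3                        # assumed observation noise

def predicted_torque(inertia, accel):
    return inertia * accel             # toy linear dynamics model

# Uniform prior over contexts, updated after each interaction.
log_post = np.log(np.full(len(contexts), 1.0 / len(contexts)))

for step in range(20):
    accel = rng.uniform(-1.0, 1.0)
    observed = predicted_torque(contexts[true_context], accel) \
               + rng.normal(0.0, noise_std)
    for c, inertia in contexts.items():
        err = observed - predicted_torque(inertia, accel)
        # Gaussian log-likelihood of the observation under context c
        log_post[c] += -0.5 * (err / noise_std) ** 2
    log_post -= np.max(log_post)       # numerical stabilisation

posterior = np.exp(log_post)
posterior /= posterior.sum()
print("posterior over contexts:", np.round(posterior, 3))
```

Running the sketch concentrates the posterior on the context whose dynamics model best explains the observed torques, which is the sense in which control-relevant quantities can be "inferred from interaction" rather than measured directly.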