Performance of Lookahead Control Policies in the Face of Abstractions and Approximations

  • Authors:
  • Ilya Levner;Vadim Bulitko;Omid Madani;Russell Greiner

  • Affiliations:
  • -;-;-;-

  • Venue:
  • Proceedings of the 5th International Symposium on Abstraction, Reformulation and Approximation
  • Year:
  • 2002

Abstract

This paper explores the formulation of image interpretation as a Markov Decision Process (MDP) problem, highlighting the important assumptions in the MDP formulation. Furthermore, state abstraction, value-function and action approximations, as well as lookahead search, are presented as necessary solution methodologies. We view the task of image interpretation as a dynamic control problem in which the optimal vision operator is selected responsively based on the problem-solving state at hand. The control policy, therefore, maps problem-solving states to operators in an attempt to minimize the total problem-solving time while reliably interpreting the image. Real-world domains, such as image interpretation, usually have extremely large state spaces, which require methods of abstraction in order to be manageable by today's information processing systems. In addition, an optimal value function (V*) used to evaluate state quality is generally unavailable, requiring approximations to be used in conjunction with state abstraction. Therefore, the performance of the system is directly related to the types of abstractions and approximations present.
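
The abstract's key ingredients (a control MDP, an approximate value function standing in for V*, and lookahead search) can be illustrated with a minimal sketch. The Python code below is not from the paper; the toy states, operators, rewards, and the stand-in estimate v_hat are illustrative assumptions only. It implements a generic depth-limited expectimax lookahead that backs up rewards through the MDP and evaluates frontier states with the approximate value function, then selects the operator with the best backed-up value.

```python
# Minimal sketch (not the authors' code) of a depth-limited lookahead
# control policy for a generic MDP, with an approximate value function
# v_hat used at the search frontier in place of the unavailable V*.

from typing import Callable, List, Tuple

State = str
Action = str
# transition: (state, action) -> list of (probability, next_state, reward)
Transition = Callable[[State, Action], List[Tuple[float, State, float]]]


def lookahead_value(state: State, depth: int,
                    actions: Callable[[State], List[Action]],
                    transition: Transition,
                    v_hat: Callable[[State], float],
                    gamma: float = 0.95) -> float:
    """Depth-limited expectimax backup; v_hat approximates V* at the frontier."""
    if depth == 0 or not actions(state):
        return v_hat(state)
    best = float("-inf")
    for a in actions(state):
        q = sum(p * (r + gamma * lookahead_value(s2, depth - 1, actions,
                                                 transition, v_hat, gamma))
                for p, s2, r in transition(state, a))
        best = max(best, q)
    return best


def lookahead_policy(state: State, depth: int,
                     actions: Callable[[State], List[Action]],
                     transition: Transition,
                     v_hat: Callable[[State], float],
                     gamma: float = 0.95) -> Action:
    """Select the operator whose one-step backup of the lookahead value is best."""
    def q(a: Action) -> float:
        return sum(p * (r + gamma * lookahead_value(s2, depth - 1, actions,
                                                    transition, v_hat, gamma))
                   for p, s2, r in transition(state, a))
    return max(actions(state), key=q)


if __name__ == "__main__":
    # Toy "image interpretation" MDP: states are abstract processing stages,
    # operators are vision routines, negative rewards model processing time.
    acts = {"raw": ["segment", "filter"], "segmented": ["label"], "labeled": []}
    trans = {
        ("raw", "segment"): [(0.8, "segmented", -1.0), (0.2, "raw", -1.0)],
        ("raw", "filter"): [(1.0, "raw", -0.5)],
        ("segmented", "label"): [(1.0, "labeled", 10.0)],
    }
    actions = lambda s: acts.get(s, [])
    transition = lambda s, a: trans.get((s, a), [])
    v_hat = lambda s: {"labeled": 0.0, "segmented": 8.0}.get(s, 0.0)

    # With a 2-step lookahead the policy chooses "segment" from the raw state.
    print(lookahead_policy("raw", depth=2, actions=actions,
                           transition=transition, v_hat=v_hat))
```

In this sketch the lookahead depth and the quality of v_hat play the roles discussed in the abstract: deeper search can partially compensate for a coarse value approximation, while a coarser state abstraction or a poorer v_hat degrades the resulting control policy.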