Structured Threshold Policies for Dynamic Sensor Scheduling—A Partially Observed Markov Decision Process Approach

  • Authors:
  • V. Krishnamurthy; D.V. Djonin

  • Affiliations:
  • University of British Columbia, Vancouver

  • Venue:
  • IEEE Transactions on Signal Processing
  • Year:
  • 2007

Abstract

We consider the optimal sensor scheduling problem formulated as a partially observed Markov decision process (POMDP). Due to operational constraints, at each time instant the scheduler can dynamically select one out of a finite number of sensors and record a noisy measurement of an underlying Markov chain. The aim is to compute the optimal measurement scheduling policy that minimizes a cost function comprising estimation errors and measurement costs. The formulation results in a nonstandard POMDP that is nonlinear in the information state. We give sufficient conditions on the cost function, the dynamics of the Markov chain, and the observation probabilities under which the optimal scheduling policy has a threshold structure with respect to a monotone likelihood ratio (MLR) ordering. As a result, the optimal scheduling policy is computationally inexpensive to implement. We then present stochastic approximation algorithms for estimating the best linear MLR-order threshold policy.
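To make the setup concrete, below is a minimal sketch under assumptions that are not taken from the paper: a two-state Markov chain, two hypothetical sensors with made-up observation matrices and measurement costs, a scalar threshold on the belief (the two-state analogue of an MLR threshold policy, since the belief simplex is one-dimensional), and a generic SPSA-style stochastic approximation update standing in for the paper's specific algorithm. It is an illustrative toy, not the authors' method.

```python
import numpy as np

# Hypothetical model parameters, chosen only for illustration.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])              # state transition probabilities
B = [np.array([[0.80, 0.20],            # sensor 0: cheap but noisy
               [0.30, 0.70]]),
     np.array([[0.95, 0.05],            # sensor 1: accurate but costly
               [0.05, 0.95]])]
c_meas = [0.1, 1.0]                     # per-use measurement costs

def belief_update(pi, u, y):
    """HMM filter: update the information state after using sensor u and seeing y."""
    unnorm = B[u][:, y] * (P.T @ pi)    # predict with P, correct with B_u(., y)
    return unnorm / unnorm.sum()

def threshold_policy(pi, theta):
    """With two states, a monotone (MLR-ordered) threshold policy reduces to a
    scalar threshold on pi[1]: use the accurate sensor only when the belief in
    state 1 is high enough."""
    return 1 if pi[1] >= theta else 0

def simulate_cost(theta, horizon=200, discount=0.95, seed=0):
    """Simulated discounted cost of a threshold policy: an estimation-error
    proxy plus measurement cost (the true state is available here only because
    this is a simulation)."""
    rng = np.random.default_rng(seed)
    x = 0
    pi = np.array([0.5, 0.5])
    total, gamma = 0.0, 1.0
    for _ in range(horizon):
        u = threshold_policy(pi, theta)
        x = rng.choice(2, p=P[x])       # Markov chain transition
        y = rng.choice(2, p=B[u][x])    # noisy measurement from chosen sensor
        pi = belief_update(pi, u, y)
        total += gamma * ((1.0 - pi[x]) + c_meas[u])
        gamma *= discount
    return total

# SPSA-style stochastic approximation of the best scalar threshold.
rng = np.random.default_rng(1)
theta = 0.5
for k in range(200):
    a_k = 0.05 / (k + 1) ** 0.602       # step size
    c_k = 0.10 / (k + 1) ** 0.101       # perturbation size
    delta = rng.choice([-1.0, 1.0])
    grad = (simulate_cost(theta + c_k * delta, seed=k)
            - simulate_cost(theta - c_k * delta, seed=k)) / (2.0 * c_k * delta)
    theta = float(np.clip(theta - a_k * grad, 0.0, 1.0))

print("estimated threshold:", theta)
```

The point of the sketch is the structure the abstract describes: because the policy is a threshold on the (MLR-ordered) belief, searching over policies collapses to tuning a small number of threshold parameters by simulation-based stochastic approximation, rather than solving the full POMDP.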