In this paper, we present a framework for condition-based maintenance optimization. We consider a technical system that can be in one of N operational states or in a failure state. The system state is not directly observable, with the exception of the failure state. Information stochastically related to the system state is obtained through condition monitoring at equidistant inspection times. The system can be replaced at any time, and a preventive replacement is less costly than a failure replacement. The objective is to find a replacement policy that minimizes the long-run expected average cost per unit time. The replacement problem is formulated as an optimal stopping problem with partial information and transformed into a problem with complete information by applying the projection theorem to a smooth semimartingale process in the objective function. The dynamic equation is derived and analyzed in the piecewise deterministic Markov process stopping framework. The contraction property is established, and an algorithm for computing the value function is presented and illustrated by an example.
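The general idea can be illustrated with a minimal sketch: a hidden-Markov deterioration model is filtered from noisy inspection observations into a belief state, and the long-run average cost of the continue/replace decision is computed by value iteration on a discretized belief grid. All numbers below (two operational states, the transition and observation matrices, the costs) are hypothetical illustrations, not the paper's model, and the discrete-time relative value iteration stands in for the paper's semimartingale/PDMP machinery.

```python
import numpy as np

# Hypothetical example: states 0 = good, 1 = degraded, 2 = failed (absorbing).
P = np.array([[0.80, 0.15, 0.05],
              [0.00, 0.70, 0.30],
              [0.00, 0.00, 1.00]])
B = np.array([[0.9, 0.1],          # P(observation y | operational state)
              [0.3, 0.7]])
c_p, c_f = 1.0, 5.0                # preventive vs. failure replacement cost

grid = np.linspace(0.0, 1.0, 101)  # belief p = P(state = degraded | history)

def step(p):
    """One inspection interval: failure probability, and the
    (posterior belief, probability weight) for each observation."""
    pi = np.array([1.0 - p, p])
    fail = pi @ P[:2, 2]
    posts, weights = [], []
    for y in (0, 1):
        a = (pi @ P[:2, :2]) * B[:, y]   # unnormalized Bayes filter
        w = a.sum()
        posts.append(a[1] / w)
        weights.append(w)
    return fail, posts, weights

def bellman(h):
    """One sweep of relative value iteration over the belief grid."""
    q_cont = np.empty_like(grid)
    h0 = h[0]                            # relative value of a fresh system
    for k, p in enumerate(grid):
        fail, posts, weights = step(p)
        q_cont[k] = fail * (c_f + h0) + sum(
            w * np.interp(q, grid, h) for q, w in zip(weights, posts))
    q_repl = c_p + q_cont[0]             # replace: pay c_p, restart as new
    return np.minimum(q_repl, q_cont), q_repl, q_cont

h = np.zeros_like(grid)
for _ in range(5000):
    Th, q_repl, q_cont = bellman(h)
    g = Th[0]                            # average cost, reference belief p = 0
    h_new = Th - g                       # normalize at the reference point
    if np.max(np.abs(h_new - h)) < 1e-10:
        h = h_new
        break
    h = h_new

policy_replace = q_repl < q_cont         # a control-limit policy is expected
print(f"long-run average cost per unit time: {g:.4f}")
```

The expected structure of the result is a control-limit rule: continue while the belief of being degraded is low, replace preventively once it crosses a threshold, since preventive replacement (c_p) is cheaper than failure replacement (c_f).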