When solving a decision problem, we want to determine an optimal policy for the decision variables of interest. A policy for a decision variable is, in principle, a function of its entire past. However, parts of the past may be irrelevant, and for both communicational and computational reasons it is important not to include redundant variables in the policies. In this paper we present a method to decompose a decision problem into a collection of smaller sub-problems such that a solution (with no redundant variables) to the original decision problem can be found by solving the sub-problems independently. The method is based on an operational characterization of the future variables that are relevant for a decision variable, thereby also characterizing the parts of a decision problem that are relevant for a particular decision.
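The notion of a redundant past variable can be illustrated with a minimal sketch (the problem, variable names, and probabilities below are hypothetical, not taken from the paper): a single decision D observes two past variables X and Y, but the utility depends only on X, so the optimal policy, found here by brute force over all policies on the full past, turns out not to depend on Y.

```python
import itertools

# Hypothetical toy decision problem: decision D observes past variables
# X and Y, but the utility depends only on X, so Y is redundant.
P_X = {0: 0.3, 1: 0.7}          # prior over X
P_Y = {0: 0.5, 1: 0.5}          # prior over Y (independent of X)

def utility(x, d):
    # Payoff depends on X and the decision only.
    return 10 if d == x else 0

# An (unreduced) policy maps each joint observation (x, y) to a decision.
def expected_utility(policy):
    return sum(P_X[x] * P_Y[y] * utility(x, policy[(x, y)])
               for x in P_X for y in P_Y)

# Brute-force search over all policies defined on the full past (X, Y).
observations = [(x, y) for x in P_X for y in P_Y]
best = max(
    (dict(zip(observations, ds))
     for ds in itertools.product([0, 1], repeat=len(observations))),
    key=expected_utility,
)

# The optimal decision is the same for every value of the redundant Y,
# so the policy reduces to a function of X alone.
for x in P_X:
    assert best[(x, 0)] == best[(x, 1)]
reduced_policy = {x: best[(x, 0)] for x in P_X}
print(reduced_policy)  # prints {0: 0, 1: 1}
```

The decomposition method described in the abstract aims to identify such redundancies structurally, without enumerating policies as this sketch does.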