Value function approximation methods have been used successfully in many applications, but the prevailing techniques often lack useful a priori error bounds. We propose a new approximate bilinear programming formulation of value function approximation, which relies on global optimization. The formulation provides strong a priori guarantees on both robust and expected policy loss by minimizing specific norms of the Bellman residual. Solving a bilinear program optimally is NP-hard, but this worst-case complexity is unavoidable because minimizing the Bellman residual is itself NP-hard. We describe and analyze the formulation as well as a simple approximate algorithm for solving bilinear programs. The analysis shows that this algorithm offers a convergent generalization of approximate policy iteration. We also briefly analyze the behavior of bilinear programming algorithms under incomplete samples. Finally, we demonstrate that the proposed approach can consistently minimize the Bellman residual on simple benchmark problems.
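To make the quantity being minimized concrete, the following sketch computes the Bellman residual of a candidate value function on a tiny made-up MDP, together with the kind of robust policy-loss bound the abstract alludes to (scaling as 2‖Bv‖∞/(1−γ)). This is an illustration of the objective, not the paper's bilinear programming algorithm; all numbers and names are hypothetical.

```python
# Illustrative sketch only: the Bellman residual whose norm the proposed
# bilinear program minimizes, evaluated on a toy 2-state, 2-action MDP.
# The MDP data, the candidate value function v, and the variable names
# are made up for this example.

GAMMA = 0.9  # discount factor

# rewards[s][a]; P[s][a][s'] is the transition probability
rewards = [[1.0, 0.0], [0.0, 2.0]]
P = [
    [[0.8, 0.2], [0.1, 0.9]],  # state 0, actions 0 and 1
    [[0.5, 0.5], [0.0, 1.0]],  # state 1, actions 0 and 1
]

def bellman_residual(v):
    """B(v)(s) = max_a [ r(s,a) + GAMMA * sum_s' P(s'|s,a) v(s') ] - v(s)."""
    res = []
    for s in range(len(v)):
        backup = max(
            rewards[s][a] + GAMMA * sum(P[s][a][t] * v[t] for t in range(len(v)))
            for a in range(len(rewards[s]))
        )
        res.append(backup - v[s])
    return res

v = [9.0, 18.0]                      # candidate approximate value function
residual = bellman_residual(v)
norm = max(abs(x) for x in residual)  # robust objective: ||B v||_inf
bound = 2 * norm / (1 - GAMMA)        # robust policy-loss bound (up to constants)
print(norm, bound)
```

An exact solver would search over the weights of a linear value-function architecture to drive this norm down, which is what makes the overall problem bilinear (and NP-hard) rather than linear.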