Sequential frameworks for statistics-based value function representation in approximate dynamic programming
Computational Optimization and Applications
Approximate dynamic programming (ADP) commonly employs value function approximation to numerically solve complex dynamic programming problems. A statistical perspective of value function approximation employs a design and analysis of computer experiments (DACE) approach, where the ''computer experiment'' yields points on the value function curve. The DACE approach has been used to numerically solve high-dimensional, continuous-state stochastic dynamic programming problems, and primarily performs two tasks: (1) design of experiments and (2) statistical modeling. The use of design of experiments enables more efficient discretization. However, identifying the appropriate sample size is not straightforward. Furthermore, identifying the appropriate model structure is a well-known problem in the field of statistics. In this paper, we present a sequential method that can adaptively determine both sample size and model structure. Number-theoretic methods (NTM) are used to sequentially grow the experimental design because of their ability to fill the design space. Feed-forward neural networks (NNs) are used for statistical modeling because of their adjustable structural complexity. This adaptive value function approximation (AVFA) method must be automated to enable efficient implementation within ADP. An AVFA algorithm is introduced that increments the size of the state-space training data in each sequential step and, for each sample size, performs a successive model search to find an optimal NN model. The new algorithm is tested on a nine-dimensional inventory forecasting problem.
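The sequential loop described above — grow a space-filling design with a number-theoretic method, then at each sample size search over NN complexities by held-out error — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a Halton sequence as the NTM design, a one-hidden-layer tanh network with random hidden weights and a least-squares output layer as the NN, and a toy two-dimensional function standing in for the value function samples; the names `halton`, `fit_nn`, and `avfa` are hypothetical.

```python
import numpy as np

def halton(n, dim):
    """First n points of the Halton low-discrepancy sequence in [0,1]^dim
    (supports up to 10 dimensions with this prime list)."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    def vdc(i, base):  # van der Corput radical inverse in the given base
        x, f = 0.0, 1.0 / base
        while i > 0:
            x += f * (i % base)
            i //= base
            f /= base
        return x
    return np.array([[vdc(i + 1, primes[d]) for d in range(dim)]
                     for i in range(n)])

def fit_nn(X, y, hidden, rng):
    """One-hidden-layer tanh network: random hidden weights, output
    layer solved by least squares (a stand-in for full NN training)."""
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return lambda Xn: np.tanh(Xn @ W + b) @ beta

def avfa(value_fn, dim, sizes=(32, 64, 128), hiddens=(4, 8, 16), seed=0):
    """Sequentially grow the design; for each sample size, search over
    model complexity and keep the model with the best validation RMSE."""
    rng = np.random.default_rng(seed)
    X_val = rng.random((200, dim))
    y_val = value_fn(X_val)
    best = None
    for n in sizes:
        X = halton(n, dim)       # nested designs: first n points of one sequence
        y = value_fn(X)
        for h in hiddens:
            model = fit_nn(X, y, h, rng)
            err = np.sqrt(np.mean((model(X_val) - y_val) ** 2))
            if best is None or err < best[0]:
                best = (err, n, h, model)
    return best

# toy smooth function standing in for sampled value-function points
f = lambda X: np.sin(2 * np.pi * X[:, 0]) + X[:, 1] ** 2
err, n, h, model = avfa(f, dim=2)
```

In a real ADP setting, each call to `value_fn` would be one "computer experiment" (a stage of the dynamic program evaluated at a state), and the design growth would stop once the validation error no longer improves with added samples.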