An orthonormal basis adaptation method for function approximation was developed and applied to reinforcement learning with a multi-dimensional continuous state space. First, the basis used for linear function approximation of a control function is initialized to an orthonormal basis. Next, basis elements with small activity are replaced by candidate elements as learning progresses; as this replacement is repeated, the number of high-activity basis elements grows. Example chaos control problems for multiple logistic maps were solved, demonstrating that the method can modify the basis, while preserving orthonormality, in accordance with changes in the environment, thereby improving the performance of reinforcement learning and eliminating the adverse effects of redundant noisy states.
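The replace-the-least-active-element loop described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it assumes the Fourier cosine family on [0, 1] as the orthonormal candidate pool and a simple per-element activity measure, and the names `adapt_basis` and `value` are hypothetical. Because every active element and every candidate are drawn from one orthonormal family, any swap automatically keeps the active set orthonormal.

```python
import numpy as np

def cosine_feature(i, s):
    # Element i of the Fourier cosine basis on [0, 1]:
    # phi_0(s) = 1, phi_i(s) = sqrt(2) * cos(i * pi * s).
    # This family is orthonormal in L2([0, 1]), so any subset
    # of it is orthonormal as well.
    return 1.0 if i == 0 else np.sqrt(2.0) * np.cos(i * np.pi * s)

def adapt_basis(active, weights, activity, candidates, threshold=1e-3):
    """Swap the least-active basis element for a fresh candidate.

    `active` lists the indices of the basis elements currently in use,
    `activity` accumulates how much each one contributes to the
    approximated function. An element whose activity falls below
    `threshold` is replaced; its weight and activity are reset so the
    new element starts learning from scratch.
    """
    k = int(np.argmin(activity))
    if activity[k] < threshold and candidates:
        active[k] = candidates.pop(0)   # bring in a new orthonormal element
        weights[k] = 0.0                # restart its weight
        activity[k] = 0.0               # and its activity estimate
    return active, weights, activity

def value(s, active, weights):
    # Linear function approximation over the active basis elements.
    return sum(w * cosine_feature(i, s) for i, w in zip(active, weights))
```

In a learning loop, `activity` would be updated from the magnitude of each element's contribution (e.g. accumulating `|w_i * phi_i(s)|` along visited states) and `adapt_basis` would be called periodically, so low-contribution elements are gradually traded for candidates until the active set is dominated by high-activity elements.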