This paper discusses the problem of optimizing the output of neural networks (NNs) with fixed weights over a specified input space. When nonlinear activation functions are used, the problem is highly nonlinear. This global optimization problem arises in the reinforcement learning (RL) community. Interval analysis is applied to guarantee that all solutions are found to any desired accuracy, with guaranteed bounds. The major drawbacks of interval analysis, namely the dependency effect and the high computational load, are both present in the NN output optimization problem. Taylor models (TMs) are introduced to mitigate these drawbacks; they have excellent convergence properties for small intervals. However, the dependency effect remains and is even aggravated when evaluating large input domains. As an alternative to TMs, a different form of polynomial inclusion function, called the polynomial set (PS) method, is introduced. This new method has the property that the bounds on the network output are at least as tight as those obtained through standard interval arithmetic (IA). Experiments show that the PS method outperforms the other methods on the NN output optimization problem.
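The dependency effect mentioned above can be seen in a few lines of code. The following is a minimal illustrative sketch (not the paper's TM or PS method, and the toy network and weights are invented for illustration): it evaluates a fixed-weight expression y(x) = 2·tanh(x) − 2·tanh(x) over the input box x ∈ [−1, 1] with naive interval arithmetic. The true output is identically zero, but IA treats the two occurrences of tanh(x) as independent intervals and returns a wide enclosure instead.

```python
import math

class Interval:
    """Closed interval [lo, hi] with the basic IA operations used below."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        # IA subtraction assumes the operands vary independently --
        # this is exactly where the dependency effect enters.
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def scale(self, c):
        """Multiply by a nonnegative scalar c."""
        return Interval(c * self.lo, c * self.hi)
    def tanh(self):
        # tanh is monotone, so evaluating the endpoints gives an exact enclosure
        return Interval(math.tanh(self.lo), math.tanh(self.hi))

x = Interval(-1.0, 1.0)
h = x.tanh()                      # exact enclosure of tanh(x) on [-1, 1]
y = h.scale(2.0) - h.scale(2.0)   # IA forgets both terms share the same x

print(f"IA bound on y: [{y.lo:.4f}, {y.hi:.4f}]")  # wide, roughly [-3.05, 3.05]
print("true range of y: [0, 0]")                   # y(x) == 0 for every x
```

The overestimation grows with the width of the input domain, which is why the abstract notes that the effect worsens on large input boxes; tighter inclusion functions such as TMs or the PS method aim to track these correlations instead of discarding them.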