Backpropagation is the most widely used neural network training algorithm. It rests on the mathematical notion of an ordered derivative. In this paper, we formulate ordered derivatives and the backpropagation training algorithm using the time scales calculus, an emerging area of mathematics that unifies continuous and discrete analysis within a single coherent theoretical framework and has potential applications across a wide range of interdisciplinary problems. Using this calculus, we present a generalization of backpropagation that applies beyond the purely continuous and purely discrete cases. We develop a new multivariate chain rule for this calculus, define ordered derivatives on time scales, prove a key theorem about them, and derive the backpropagation weight update equations for a feedforward multilayer neural network architecture. By drawing together the time scales calculus and neural network learning, we present the first connection between these two major fields of research.
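To give a concrete sense of how a time scale unifies the continuous and discrete views of learning, the following minimal Python sketch steps a gradient-descent weight update across the points of an arbitrary time scale. It is an illustration under assumptions, not the paper's derivation: the toy quadratic error, the function names grad_E and train_on_time_scale, and the parameter values are all hypothetical. It uses only the standard identity w(σ(t)) = w(t) + μ(t)·w^Δ(t), where μ(t) is the graininess, so that μ = 1 recovers the classic discrete weight update and μ → 0 approximates continuous gradient flow.

```python
import numpy as np

def grad_E(w):
    """Gradient of a toy quadratic error E(w) = 0.5 * ||w||^2 (illustrative only)."""
    return w

def train_on_time_scale(w0, points, eta=0.1):
    """Advance weights along a time scale given by an increasing array of points.

    The dynamic equation w^Delta(t) = -eta * grad E(w(t)) is stepped with
    w(sigma(t)) = w(t) + mu(t) * w^Delta(t), where mu(t) = sigma(t) - t
    is the graininess between consecutive points of the time scale.
    """
    w = np.asarray(w0, dtype=float)
    for t, t_next in zip(points[:-1], points[1:]):
        mu = t_next - t                    # graininess at t
        w = w + mu * (-eta * grad_E(w))    # time-scale weight update
    return w

if __name__ == "__main__":
    w0 = np.array([1.0, -2.0])
    discrete = np.arange(0.0, 10.0 + 1e-9, 1.0)   # mu = 1: classic discrete update
    fine = np.arange(0.0, 10.0 + 1e-9, 0.01)      # mu -> 0: near-continuous flow
    print("mu = 1   :", train_on_time_scale(w0, discrete))
    print("mu = 0.01:", train_on_time_scale(w0, fine))
```

On the fine grid the iterates approach the continuous solution w(10) = w0·e^(-1), while the unit-graininess run reproduces the familiar update w_{k+1} = (1 - η)·w_k; the same code covers both regimes, which is the unification the abstract describes.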