The natural gradient learning method is known to have ideal performance for on-line training of multilayer perceptrons. It avoids the plateaus that cause the slow convergence of the backpropagation method, and it is Fisher efficient, whereas the conventional method is not. However, implementing the method requires computing the Fisher information matrix and its inverse, which is very difficult in practice. This article proposes an adaptive method of directly obtaining the inverse of the Fisher information matrix. It generalizes the adaptive Gauss-Newton algorithms and provides a solid theoretical justification for them. Simulations show that the proposed adaptive method works very well for realizing natural gradient learning.
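To make the adaptive scheme concrete, the sketch below applies a rank-one adaptive estimate of the inverse Fisher information matrix to on-line training of a toy one-hidden-layer network in NumPy. The network size, the step-size schedules, and the synthetic teacher data are illustrative assumptions rather than the article's experimental setup; the update G_inv <- (1 + eps_t) G_inv - eps_t (G_inv grad_f)(G_inv grad_f)^T is one standard form of such a direct inverse estimate in the natural-gradient literature.

```python
# Minimal sketch of adaptive natural gradient learning for a tiny
# one-hidden-layer MLP. This is an illustration, not the authors'
# implementation: the toy model, the eps_t and learning-rate schedules,
# and the synthetic teacher data are assumptions made for the demo.
import numpy as np

rng = np.random.default_rng(0)

d, h = 3, 4                      # input and hidden dimensions (assumed)
n_params = h * d + h             # W (h x d) flattened, plus output weights v


def unpack(w):
    """Split the flat parameter vector into (W, v)."""
    W = w[: h * d].reshape(h, d)
    v = w[h * d:]
    return W, v


def forward(w, x):
    """Scalar output f(x; w) = v . tanh(W x) and its gradient in w."""
    W, v = unpack(w)
    z = np.tanh(W @ x)
    f = v @ z
    # df/dW_ij = v_i (1 - z_i^2) x_j ;  df/dv_i = z_i
    grad_W = np.outer(v * (1.0 - z ** 2), x)
    grad_f = np.concatenate([grad_W.ravel(), z])
    return f, grad_f


# Teacher network generating noisy targets (an assumption for the demo).
w_teacher = rng.normal(size=n_params)

w = rng.normal(scale=0.1, size=n_params)   # student parameters
G_inv = np.eye(n_params)                   # running estimate of G^{-1}

for t in range(1, 5001):
    x = rng.normal(size=d)
    y = forward(w_teacher, x)[0] + 0.01 * rng.normal()

    f, grad_f = forward(w, x)
    grad_loss = -(y - f) * grad_f          # gradient of 1/2 (y - f)^2

    # Rank-one adaptive update of the inverse Fisher estimate:
    # G_inv <- (1 + eps) G_inv - eps (G_inv grad_f)(G_inv grad_f)^T
    eps = 1.0 / t                          # simple 1/t schedule (assumed)
    u = G_inv @ grad_f
    G_inv = (1.0 + eps) * G_inv - eps * np.outer(u, u)

    # Natural gradient step on the parameters.
    eta = 0.05                             # constant step size (assumed)
    w -= eta * (G_inv @ grad_loss)

    if t % 1000 == 0:
        print(f"step {t}: squared error {0.5 * (y - f) ** 2:.5f}")
```

The point of the rank-one form is that each step costs only matrix-vector products and an outer product, roughly O(n^2) in the number of parameters, so the inverse Fisher information is tracked on-line without ever forming or inverting the full matrix.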