The parameter spaces of hierarchical systems such as multilayer perceptrons include singularities due to the symmetry and degeneration of hidden units. A parameter space forms a geometrical manifold, called the neuromanifold in the case of neural networks. Such a model is identified with a statistical model, and a Riemannian metric is given by the Fisher information matrix. However, the matrix degenerates at singularities. Such a singular structure is ubiquitous not only in multilayer perceptrons but also in gaussian mixture probability densities, the ARMA time-series model, and many other cases. The standard statistical paradigm of the Cramér-Rao theorem does not hold, and the singularity gives rise to strange behaviors in parameter estimation, hypothesis testing, Bayesian inference, model selection, and, in particular, the dynamics of learning from examples. Prevailing theories have so far paid little attention to the problems caused by singularity, relying on ordinary statistical theory developed for regular (nonsingular) models. Only recently have researchers remarked on the effects of singularity, and theories are now being developed.

This article gives an overview of the phenomena caused by the singularities of statistical manifolds related to multilayer perceptrons and gaussian mixtures. We present our recent results on these problems, using simple toy models to show explicit solutions. We explain that, because the Fisher information matrix degenerates, the maximum likelihood estimator is no longer asymptotically gaussian; that model selection criteria such as AIC, BIC, and MDL fail to hold in these models; that a smooth Bayesian prior becomes singular in such models; and that the trajectories of learning dynamics are strongly affected by the singularity, causing plateaus or slow manifolds in the parameter space. The natural gradient method is shown to perform well because it takes the singular geometrical structure into account. The generalization error and the training error are studied in some examples.
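To make the degeneration of the Fisher information matrix concrete, here is a minimal numerical sketch, not taken from the article: it assumes a two-hidden-unit tanh regression model f(x) = v1 tanh(w1 x) + v2 tanh(w2 x) with unit-variance gaussian noise and gaussian inputs, and the names grads_f and fisher are illustrative. When the two hidden units coincide (w1 = w2), the columns of the Jacobian become linearly dependent, so the estimated Fisher matrix loses rank.

```python
import numpy as np

rng = np.random.default_rng(0)

def grads_f(theta, xs):
    # Gradient of f(x) = v1*tanh(w1*x) + v2*tanh(w2*x) with respect to
    # theta = (v1, v2, w1, w2), one row per input sample x.
    v1, v2, w1, w2 = theta
    t1, t2 = np.tanh(w1 * xs), np.tanh(w2 * xs)
    return np.stack(
        [t1, t2, v1 * (1.0 - t1**2) * xs, v2 * (1.0 - t2**2) * xs],
        axis=1,
    )

def fisher(theta, n=100_000):
    # Monte Carlo estimate of G(theta) = E_x[grad f . grad f^T] for
    # unit-variance gaussian regression noise and x ~ N(0, 1).
    g = grads_f(theta, rng.standard_normal(n))
    return g.T @ g / n

regular  = np.array([1.0, -0.5, 0.8, -1.2])  # two distinct hidden units
singular = np.array([0.6,  0.4, 0.8,  0.8])  # w1 == w2: units coincide

for name, theta in [("regular", regular), ("singular", singular)]:
    eigvals = np.linalg.eigvalsh(fisher(theta))
    print(f"{name:8s} eigenvalues of G: {np.round(eigvals, 8)}")
```

On the singular parameters, two eigenvalues collapse to numerical zero (the v1/v2 and w1/w2 directions become indistinguishable), which is why the Cramér-Rao bound fails and plain gradient descent stalls on plateaus there. The natural gradient update, theta <- theta - eta * G(theta)^{-1} grad L(theta), accounts for this geometry; near singularities G must be inverted with care (for example, via a pseudo-inverse or damping, a common practice rather than a prescription from this article).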