Artificial Neural Networks: Approximation and Learning Theory
We consider regression models involving multilayer perceptrons (MLPs) with one hidden layer and Gaussian noise. The data are assumed to be generated by a true MLP model, and the parameters of the MLP are estimated by maximizing the likelihood of the model. When the number of hidden units is over-estimated, the Fisher information matrix of the model is singular, and the asymptotic behavior of the likelihood ratio (LR) statistic is unknown, or can even be divergent if the set of possible parameters is too large. This paper deals with this case and gives the exact asymptotic law of the LR statistic. Namely, if the parameters of the MLP lie in a suitable compact set, we show that the LR statistic converges to the maximum of the square of a Gaussian process indexed by a class of limit score functions.
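The setting above can be illustrated numerically. The sketch below (not from the paper; the true function, sample size, and optimizer are illustrative assumptions) generates data from a true one-hidden-unit MLP with Gaussian noise, fits both the true model size and an over-estimated size by maximum likelihood (equivalent to least squares under Gaussian noise), and computes the LR statistic, whose non-standard asymptotic law is the object of the paper.

```python
# Illustrative sketch: LR statistic for an over-parameterized one-hidden-layer
# MLP regression with Gaussian noise. All concrete choices (true weights,
# sample size, optimizer) are assumptions for the demo, not from the paper.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, sigma = 200, 0.3
x = rng.uniform(-2.0, 2.0, size=n)
# True model: one hidden unit, f(x) = 1.5 * tanh(2x - 0.5), plus Gaussian noise.
y = 1.5 * np.tanh(2.0 * x - 0.5) + sigma * rng.normal(size=n)

def mlp(params, x, k):
    # One-hidden-layer MLP with k tanh units: sum_j a_j * tanh(b_j x + c_j).
    a, b, c = params[:k], params[k:2 * k], params[2 * k:3 * k]
    return np.tanh(np.outer(x, b) + c) @ a

def rss(params, x, y, k):
    # Residual sum of squares; minimizing it is MLE under Gaussian noise.
    return np.sum((y - mlp(params, x, k)) ** 2)

def fit(k, starts):
    # Multi-start BFGS; keep the best local optimum found.
    best_fun, best_x = np.inf, None
    for p0 in starts:
        res = minimize(rss, p0, args=(x, y, k), method="BFGS")
        if res.fun < best_fun:
            best_fun, best_x = res.fun, res.x
    return best_fun, best_x

# Fit with the true number of hidden units (k = 1).
rss1, p1 = fit(1, [rng.normal(size=3) for _ in range(10)])

# Fit with an over-estimated size (k = 2). Warm-starting from the k = 1
# solution with the extra unit zeroed guarantees the nested model is no worse.
warm = np.array([p1[0], 0.0, p1[1], 0.0, p1[2], 0.0])
rss2, _ = fit(2, [warm] + [rng.normal(size=6) for _ in range(10)])

# Under Gaussian noise with profiled variance, the LR statistic is
# n * log(RSS_small / RSS_large).
lr = n * np.log(rss1 / rss2)
print(f"RSS(k=1)={rss1:.3f}  RSS(k=2)={rss2:.3f}  LR={lr:.3f}")
```

Because the Fisher information is singular at the true parameter, the LR statistic here does not follow the usual chi-squared limit; the paper shows that, on a suitable compact parameter set, it converges to the supremum of a squared Gaussian process over the limit score functions.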