In this paper, we propose a new Expectation-Maximization (EM) algorithm that speeds up the training of feedforward networks with local activation functions, such as the Radial Basis Function (RBF) network. In previously proposed approaches, at each E-step the residual is decomposed equally among the units or proportionally to the output-layer weights. However, these approaches tend to slow down the training of networks with local activation units. To overcome this drawback, we use a new E-step that applies a soft decomposition of the residual among the units. In particular, the decoupling variables are estimated as the posterior probability of a component given an input-output pattern. This adaptive decomposition takes into account the local nature of the activation functions and improves convergence by allowing the RBF units to focus on different subregions of the input space. The proposed EM training algorithm has been applied to the nonlinear modeling of a MESFET transistor.
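For concreteness, the following is a minimal sketch of the kind of EM iteration the abstract describes: the current residual is split softly among the RBF units according to a posterior-like responsibility of each unit for an input-output pattern, after which the M-step reduces to decoupled per-unit least-squares fits. The function names (rbf_design, em_fit_output_weights), the noise parameter sigma2, and the Gaussian responsibility model are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rbf_design(X, centers, widths):
    """Gaussian RBF activations phi_j(x_n); returns an (N, J) design matrix."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths[None, :] ** 2))

def em_fit_output_weights(X, y, centers, widths, n_iter=100, sigma2=0.05):
    """EM estimation of the output-layer weights of an RBF network (a sketch).

    E-step: the residual e_n = y_n - sum_j w_j phi_j(x_n) is split among the
    units with soft shares h_nj, taken here as a posterior-like responsibility
    combining the local activation phi_j(x_n) with how well unit j alone
    explains y_n (an assumed Gaussian likelihood).
    M-step: each unit solves its own one-dimensional least-squares problem on
    its decomposed target, so the J subproblems decouple.
    """
    phi = rbf_design(X, centers, widths)                      # (N, J)
    w = np.zeros(phi.shape[1])
    for _ in range(n_iter):
        e = y - phi @ w                                       # residuals e_n
        # E-step: soft decomposition of the residual among the units.
        log_h = np.log(phi + 1e-300) \
              - 0.5 * (y[:, None] - w[None, :] * phi) ** 2 / sigma2
        log_h -= log_h.max(axis=1, keepdims=True)             # stability
        h = np.exp(log_h)
        h /= h.sum(axis=1, keepdims=True)                     # sum_j h_nj = 1
        # Decomposed targets y_nj = w_j * phi_nj + h_nj * e_n, so that
        # sum_j y_nj = y_n is preserved at every iteration.
        y_dec = w[None, :] * phi + h * e[:, None]
        # M-step: J independent one-dimensional least-squares fits.
        w = (phi * y_dec).sum(axis=0) / ((phi ** 2).sum(axis=0) + 1e-12)
    return w

# Toy usage: fit a 1-D function with fixed, hand-chosen centers and widths.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)
centers = np.linspace(-3, 3, 10)[:, None]
widths = np.full(10, 0.7)
w = em_fit_output_weights(X, y, centers, widths)
print("trained weights:", np.round(w, 3))
```

Because each h_nj is weighted by the unit's own activation, a unit receives an appreciable share of the residual only for patterns inside its receptive field, which is the mechanism by which the units specialize in different subregions of the input space.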