The training of neural networks, and in particular of Multi-Layer Perceptrons (MLPs), is performed by minimizing an error function usually known as the "cost function". In our previous works, we applied the Error Entropy Minimization (EEM) algorithm, and its optimized version, to classification, using as cost function the entropy of the errors between the network outputs and the desired targets. One of the difficulties in implementing the EEM algorithm is the choice of the smoothing parameter, also known as the window size, of the Parzen window probability density estimate used to compute the entropy and its gradient. We present here a formula yielding the value of the smoothing parameter as a function of the number of data samples and of the neural network output dimension. Several experiments with real data sets were carried out to show the validity of the proposed formula.
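To make the role of the smoothing parameter concrete, the following minimal Python sketch estimates Rényi's quadratic entropy of the error samples with a Gaussian Parzen window, which is the kind of estimator typically used in EEM training. The helper smoothing_parameter below is a generic Silverman-style heuristic depending on the sample size and output dimension; it is only a placeholder, not the formula proposed in the paper, which is not reproduced in this abstract.

import numpy as np

def renyi_quadratic_entropy(errors, h):
    """Estimate Renyi's quadratic entropy H2 = -log V of the error samples,
    where V is the information potential computed with a Gaussian Parzen
    window of width h (the pairwise kernel has variance 2*h^2)."""
    e = np.asarray(errors, dtype=float)
    n, d = e.shape                                  # n samples, d = output dimension
    diffs = e[:, None, :] - e[None, :, :]           # pairwise error differences
    sq = np.sum(diffs ** 2, axis=-1)                # squared pairwise distances
    norm = 1.0 / ((2.0 * np.pi) ** (d / 2) * (np.sqrt(2.0) * h) ** d)
    potential = norm * np.mean(np.exp(-sq / (4.0 * h ** 2)))
    return -np.log(potential)

def smoothing_parameter(n, d):
    """Placeholder Silverman-style rule in terms of n and d (assumes roughly
    unit-scale errors); NOT the formula proposed in the paper."""
    return (4.0 / (n * (d + 2))) ** (1.0 / (d + 4))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    errors = rng.normal(scale=0.3, size=(200, 2))   # outputs minus targets
    h = smoothing_parameter(*errors.shape)
    print("h =", h, "H2 =", renyi_quadratic_entropy(errors, h))

In EEM training, the gradient of this entropy estimate with respect to the network weights is what drives the weight updates, so the choice of h directly affects both the smoothness of the cost surface and the quality of the resulting classifier.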