The learning process of a multilayer perceptron requires the optimization of an error function E(y,t) comparing the predicted output, y, with the observed target, t. We review several common error functions, analyze their mathematical properties for data classification purposes, and introduce a new one, E_Exp, inspired by the Z-EDM algorithm that we have recently proposed. An important property of E_Exp is its ability to emulate the behavior of other error functions through the adjustment of a single real-valued parameter. In other words, E_Exp is a generalized error function embodying complementary features of the other functions. The experimental results show that the flexibility of this new, generalized error function allows one to match the best results achievable with the other functions, and to improve on them in some cases.
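To make the emulation idea concrete, the following is a minimal sketch comparing two standard error functions with an exponential-type function of the form E_Exp(y,t) = tau * sum(exp(||y - t||^2 / tau)). The exact functional form and normalization used in the paper may differ; the parameter name `tau` and this specific expression are assumptions for illustration only. The key intuition is that for large tau, tau * exp(d/tau) ≈ tau + d, so E_Exp reduces (up to an additive constant) to the sum of squared errors, which is one instance of the emulation property.

```python
import numpy as np

def mse(y, t):
    # Mean squared error between predictions y and targets t.
    return np.mean((y - t) ** 2)

def cross_entropy(y, t, eps=1e-12):
    # Binary cross-entropy for targets in {0,1} and outputs in (0,1).
    y = np.clip(y, eps, 1 - eps)
    return -np.mean(t * np.log(y) + (1 - t) * np.log(1 - y))

def exp_error(y, t, tau=1.0):
    # Hypothetical sketch of an exponential-type error function:
    # E_Exp = tau * sum_i exp(||y_i - t_i||^2 / tau).
    # The precise form in the paper may differ; tau is the single
    # real-valued parameter that controls which error function
    # E_Exp emulates.
    sq_dist = np.sum((y - t) ** 2, axis=-1)
    return tau * np.sum(np.exp(sq_dist / tau))

# Toy two-sample, one-output example.
y = np.array([[0.2], [0.8]])
t = np.array([[0.0], [1.0]])

sse = np.sum((y - t) ** 2)            # sum of squared errors = 0.08
# For large tau, E_Exp(y,t) - E_Exp(t,t) approaches the SSE,
# since E_Exp(t,t) contributes the additive constant tau * N.
tau = 1e6
diff = exp_error(y, t, tau) - exp_error(t, t, tau)
```

For small tau, the exponential weights large deviations much more heavily, giving a qualitatively different (outlier-sensitive) behavior; sweeping the single parameter thus moves the function between regimes, which is the flexibility the abstract refers to.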