Connectionist Speech Recognition: A Hybrid Approach
Neural Computation
A Neural Network Based Model for Prognosis of Early Breast Cancer
Applied Intelligence
Stochastic Organization of Output Codes in Multiclass Learning Problems
Neural Computation
Risk-sensitive loss functions for sparse multi-category classification problems
Information Sciences: an International Journal
No-reference image quality assessment using modified extreme learning machine classifier
Applied Soft Computing
Online adaptive radial basis function networks for robust object tracking
Computer Vision and Image Understanding
Estimating the class posterior probabilities in protein secondary structure prediction
PRIB'11 Proceedings of the 6th IAPR International Conference on Pattern Recognition in Bioinformatics
Texture segmentation using neural networks and multi-scale wavelet features
ICNC'05 Proceedings of the First International Conference on Advances in Natural Computation - Volume Part II
Neural network based texture segmentation using a Markov random field model
ISNN'06 Proceedings of the Third International Conference on Advances in Neural Networks - Volume Part II
It is now well known that neural classifiers can learn to estimate the a posteriori probabilities of the classes given the input. This note offers a proof that is shorter than the traditional ones: only one class need be considered, and straightforward minimization of the error function yields the main result. The method extends to any differentiable error function. We also present a simple visual proof of the same theorem, which emphasizes that the network must be perfectly trained and have sufficient plasticity.
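The result can be illustrated numerically. The sketch below (not taken from the paper; the data distribution, learning rate, and iteration count are assumptions chosen for illustration) trains a single sigmoid unit by gradient descent on the squared error for a two-class problem with Gaussian class-conditional densities. Because the means are at ±1 with unit variance and equal priors, Bayes' rule gives the true posterior P(y=1|x) = 1/(1 + e^{-2x}), and the trained output converges toward it, as the theorem predicts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two equally likely classes with Gaussian class-conditional densities
# (means -1 and +1, unit variance) -- an illustrative choice, not the paper's.
n = 20000
y = rng.integers(0, 2, n)                       # class labels in {0, 1}
x = rng.normal(loc=np.where(y == 1, 1.0, -1.0), scale=1.0)

def true_posterior(x):
    # Bayes' rule for this setup reduces to a logistic with slope 2.
    return 1.0 / (1.0 + np.exp(-2.0 * x))

# A one-unit "network": sigmoid(w*x + b), trained by full-batch gradient
# descent on the squared error between output and target label.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    grad = (p - y) * p * (1 - p)                # d(0.5*(p - y)^2)/d(w*x + b)
    w -= lr * np.mean(grad * x)
    b -= lr * np.mean(grad)

# The trained output tracks the analytic posterior probability.
grid = np.linspace(-3, 3, 7)
net = 1.0 / (1.0 + np.exp(-(w * grid + b)))
print(np.max(np.abs(net - true_posterior(grid))))   # small residual error
```

Note that the squared-error target here is the 0/1 class label, not the posterior itself; the network output nonetheless approaches P(y=1|x), which is exactly the property the note proves.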