The problem of designing cost functions to estimate a posteriori probabilities in multiclass problems is addressed. We establish necessary and sufficient conditions that these costs must satisfy in one-class one-output networks whose outputs are consistent with probability laws. We focus our attention on a particular subset of the corresponding cost functions, namely those satisfying two common properties: symmetry and separability; well-known cost functions, such as the quadratic cost or the cross-entropy, are particular cases in this subset. Finally, we present a universal stochastic gradient learning rule for single-layer networks, in the sense that it minimizes a general version of these cost functions for a wide family of nonlinear activation functions.
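As a concrete illustration, the following is a minimal sketch of such a stochastic gradient rule, assuming the cross-entropy member of the cost family and a softmax activation (one nonlinearity whose outputs are consistent with probability laws). The function names, learning rate, and synthetic data are illustrative assumptions, not the paper's notation or experiments.

    import numpy as np

    def softmax(z):
        # Softmax keeps outputs nonnegative and summing to one,
        # so they can be read as estimated posterior probabilities.
        z = z - z.max()          # shift for numerical stability
        e = np.exp(z)
        return e / e.sum()

    def sgd_step(W, b, x, d, lr=0.1):
        """One stochastic gradient update on a single sample.

        W : (num_classes, num_features) weight matrix
        b : (num_classes,) bias vector
        x : (num_features,) input pattern
        d : (num_classes,) one-hot class target
        """
        y = W @ x + b
        p = softmax(y)           # network output: estimated posteriors
        err = p - d              # cross-entropy gradient w.r.t. pre-activations
        W -= lr * np.outer(err, x)
        b -= lr * err
        return W, b

    # Usage: train on synthetic two-class data.
    rng = np.random.default_rng(0)
    num_classes, num_features = 2, 3
    W = rng.normal(scale=0.01, size=(num_classes, num_features))
    b = np.zeros(num_classes)
    for _ in range(1000):
        c = rng.integers(num_classes)
        x = rng.normal(loc=2.0 * c, size=num_features)  # class-conditional inputs
        d = np.eye(num_classes)[c]
        W, b = sgd_step(W, b, x, d)

With the quadratic cost instead of the cross-entropy, only the error term of the update would change; both costs belong to the symmetric, separable subset discussed above.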