Multilayer neural networks and Bayes decision theory. Neural Networks.
Neural Computation
Learning of Bayesian Discriminant Functions by a Layered Neural Network. Neural Information Processing.
Multi-category Bayesian Decision by Neural Networks. ICANN '08: Proceedings of the 18th International Conference on Artificial Neural Networks, Part I.
Learning of Mahalanobis Discriminant Functions by a Neural Network. ICONIP '09: Proceedings of the 16th International Conference on Neural Information Processing, Part I.
Bayesian decision theory on three-layer neural networks. Neurocomputing.
Multicategory Bayesian decision using a three-layer neural network. ICANN/ICONIP '03: Proceedings of the 2003 Joint International Conference on Artificial Neural Networks and Neural Information Processing.
Bayesian learning of neural networks adapted to changes of prior probabilities. ICANN '05: Proceedings of the 15th International Conference on Artificial Neural Networks: Formal Models and Their Applications, Part II.
Discriminant analysis by a neural network with Mahalanobis distance. ICANN '06: Proceedings of the 16th International Conference on Artificial Neural Networks, Part II.
It is well known that a neural network can learn Bayesian discriminant functions. In the two-category normal-distribution case, the logit transform of the network output, shifted by a constant, approximates the corresponding Mahalanobis discriminant function [7]. In [10], we proposed an algorithm for estimating this constant, but it requires the network to be trained twice, with the teacher signals shifted by the mean vectors in one of the two runs. In this paper, we propose a more efficient algorithm that estimates the constant while training the network only once.
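The logit/Mahalanobis relation behind the abstract can be checked directly in the one-dimensional, equal-variance case: writing the posterior P(ω1|x) that an ideally trained network would output, its logit equals the difference of squared Mahalanobis distances (divided by two) plus the prior log-odds, so that constant shift recovers the discriminant exactly. The sketch below uses the exact Bayes posterior as a stand-in for the trained network; the means, variance, and priors are hypothetical values chosen for illustration, not taken from the paper.

```python
import math

# Two-category 1-D normal case with shared variance. The exact posterior
# plays the role of the output of a well-trained network (hypothetical setup).
mu1, mu2, sigma = 0.0, 2.0, 1.0
P1, P2 = 0.3, 0.7  # illustrative prior probabilities

def pdf(x, mu):
    # Normal density with mean mu and shared standard deviation sigma
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def posterior(x):
    # P(omega1 | x) by Bayes' rule -- the ideal network output
    num = pdf(x, mu1) * P1
    return num / (num + pdf(x, mu2) * P2)

def logit(p):
    return math.log(p / (1 - p))

def mahalanobis_discriminant(x):
    # g(x) = [d^2(x, mu2) - d^2(x, mu1)] / 2, squared Mahalanobis distances
    return (-(x - mu1) ** 2 + (x - mu2) ** 2) / (2 * sigma ** 2)

# Shifting the logit by the prior log-odds recovers the discriminant:
# logit(P(omega1|x)) - log(P1/P2) == g(x), up to floating-point error.
c = math.log(P1 / P2)
for x in (-1.0, 0.5, 1.7, 3.0):
    assert abs(logit(posterior(x)) - c - mahalanobis_discriminant(x)) < 1e-9
```

In practice the network only approximates the posterior, so the identity holds approximately rather than exactly, which is why the shift constant must be estimated from data as the paper describes.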