Learning Bayesian discriminant functions is a difficult task for ordinary one-hidden-layer neural networks because the teacher signals are dichotomous random samples. When such a network is trained, all of its parameters, the weights and thresholds, are normally optimized together. However, the parameters inside the activation functions of the hidden-layer units are updated only at the second step of backpropagation (BP) learning, and training these 'inner' parameters is often difficult when the teacher signals are dichotomous. To overcome this difficulty, we construct one-hidden-layer neural networks with a smaller number of inner parameters to be optimized, fixing some components of those parameters in advance. This inevitably increases the number of hidden-layer units, but the resulting network learns the Bayesian discriminant function better than ordinary neural networks.
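
A minimal sketch of the idea on a 1-D two-Gaussian toy problem (all names and parameter choices below are illustrative assumptions, not the paper's construction): with 0/1 teacher signals, the squared-error minimizer approximates the posterior P(class 1 | x), which is the Bayesian discriminant function, and fixing the hidden-layer inner parameters leaves only the output weights to optimize. Since the abstract does not say how the fixed components are chosen, fixed random weights and thresholds stand in here, with a deliberately generous hidden-unit count.

import numpy as np

rng = np.random.default_rng(0)

# Two-class 1-D Gaussian problem with a known posterior for comparison.
n = 2000
y = rng.random(n) < 0.5                      # dichotomous class labels
x = np.where(y, rng.normal(1.0, 1.0, n), rng.normal(-1.0, 1.0, n))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hidden layer with FIXED inner parameters: the weights and thresholds
# inside the activations are chosen in advance (randomly here, as an
# illustrative stand-in) and never trained; only the output-layer
# weights are optimized.  H is an assumed, deliberately large size,
# since fixing the inner parameters costs extra hidden units.
H = 50
w_in = rng.normal(0.0, 2.0, H)               # fixed hidden weights
b_in = rng.uniform(-4.0, 4.0, H)             # fixed hidden thresholds
Phi = sigmoid(np.outer(x, w_in) + b_in)      # n x H hidden activations
Phi = np.hstack([Phi, np.ones((n, 1))])      # output-layer bias column

# Least-squares fit of the output weights to the 0/1 teacher signals;
# the squared-error minimizer approximates E[y|x] = P(class 1 | x),
# i.e. the Bayesian discriminant function.
a, *_ = np.linalg.lstsq(Phi, y.astype(float), rcond=None)

# Compare the network output with the analytic posterior on a grid.
xg = np.linspace(-4, 4, 9)
net = sigmoid(np.outer(xg, w_in) + b_in) @ a[:-1] + a[-1]
true = 1.0 / (1.0 + np.exp(-2.0 * xg))       # posterior for this toy problem
print(np.round(net, 2))
print(np.round(true, 2))

Because the inner parameters never move, the optimization reduces to a linear least-squares problem over the output weights, which sidesteps the difficult BP step the abstract describes; the trade-off, mirrored in the sketch, is that more hidden units are needed than an ordinary fully trained network would use.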