A practical Bayesian framework for backpropagation networks. Neural Computation.
Multilayer neural networks and Bayes decision theory. Neural Networks.
Bayesian approach for neural networks—review and case studies. Neural Networks.
Pattern Classification (2nd Edition).
Multicategory Bayesian decision using a three-layer neural network. In Proceedings of ICANN/ICONIP'03.
The multilayer perceptron as an approximation to a Bayes optimal discriminant function. IEEE Transactions on Neural Networks.
Learning of Bayesian Discriminant Functions by a Layered Neural Network. Neural Information Processing.
Multi-category Bayesian Decision by Neural Networks. In Proceedings of ICANN'08, Part I.
Bayesian learning of neural networks adapted to changes of prior probabilities. In Proceedings of ICANN'05, Part II.
Discriminant analysis by a neural network with Mahalanobis distance. In Proceedings of ICANN'06, Part II.
A new algorithm for learning Mahalanobis discriminant functions by a neural network. In Proceedings of ICONIP'11, Part II.
We discuss Bayesian decision theory on neural networks. In the two-category case where the state-conditional probability densities are normal, a three-layer neural network with d hidden-layer units can approximate the posterior probability in L^p(R^d, p), where d is the dimension of the space of observables. We extend this result to the multicategory case. The number of hidden-layer units must then be increased, but it can be bounded by d(d+1)/2 irrespective of the number of categories, provided the neural network has direct connections between the input and output layers. When the state-conditional probability belongs to a familiar family of distributions, such as the binomial, multinomial, Poisson, or negative binomial distributions, a two-layer neural network can approximate the posterior probability.
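The representability results above rest on a standard identity: by Bayes' rule the posterior is the logistic sigmoid of the log-odds, which is a quadratic function of x when the state-conditional densities are normal, and an affine function of x for an exponential-family conditional such as the Poisson. The following is a minimal numerical check of both identities, not the paper's construction; it assumes NumPy, and all parameter values (priors, means, covariances, rates) are illustrative:

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# --- Two-category Gaussian case ---------------------------------------
# Priors P0, P1 and class-conditional densities N(m_k, S_k). The
# posterior P(C1 | x) equals sigmoid(log-odds), and the log-odds is a
# quadratic form in x, which is why finitely many hidden units suffice.
d = 3
P0, P1 = 0.4, 0.6
m0, m1 = rng.normal(size=d), rng.normal(size=d)
A0 = rng.normal(size=(d, d)); S0 = A0 @ A0.T + np.eye(d)
A1 = rng.normal(size=(d, d)); S1 = A1 @ A1.T + np.eye(d)

def log_gauss(x, m, S):
    """Log-density of N(m, S) at x."""
    diff = x - m
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (len(m) * np.log(2 * np.pi) + logdet
                   + diff @ np.linalg.solve(S, diff))

x = rng.normal(size=d)

# Posterior via Bayes' rule directly.
p0 = P0 * np.exp(log_gauss(x, m0, S0))
p1 = P1 * np.exp(log_gauss(x, m1, S1))
posterior_direct = p1 / (p0 + p1)

# Posterior as sigmoid of a quadratic function of x.
Q = -0.5 * (np.linalg.inv(S1) - np.linalg.inv(S0))     # quadratic part
w = np.linalg.solve(S1, m1) - np.linalg.solve(S0, m0)  # linear part
_, ld0 = np.linalg.slogdet(S0)
_, ld1 = np.linalg.slogdet(S1)
b = (np.log(P1 / P0) + 0.5 * (ld0 - ld1)
     - 0.5 * (m1 @ np.linalg.solve(S1, m1) - m0 @ np.linalg.solve(S0, m0)))
posterior_sigmoid = sigmoid(x @ Q @ x + w @ x + b)
assert np.isclose(posterior_direct, posterior_sigmoid)

# --- Poisson case -----------------------------------------------------
# For Poisson conditionals the log-odds is affine in the count k, so a
# single sigmoid output unit (a two-layer network) is exact.
lam0, lam1 = 2.0, 5.0
k = 4  # observed count
logp0 = k * np.log(lam0) - lam0 - lgamma(k + 1)
logp1 = k * np.log(lam1) - lam1 - lgamma(k + 1)
poisson_direct = (P1 * np.exp(logp1)
                  / (P0 * np.exp(logp0) + P1 * np.exp(logp1)))

slope = np.log(lam1 / lam0)                 # weight of the output unit
intercept = np.log(P1 / P0) - (lam1 - lam0)  # its bias
poisson_sigmoid = sigmoid(slope * k + intercept)
assert np.isclose(poisson_direct, poisson_sigmoid)
```

In the Gaussian case the log-odds has d(d+1)/2 distinct quadratic monomials plus a linear part; with direct input-output connections absorbing the linear part, the hidden layer only has to produce the quadratic terms, which is consistent with a bound that depends on d but not on the number of categories.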