Recurrent Bayesian reasoning in probabilistic neural networks
ICANN'07 Proceedings of the 17th International Conference on Artificial Neural Networks
When considering the probabilistic approach to neural networks within the framework of statistical pattern recognition, we approximate the class-conditional probability distributions by finite mixtures of product components. The mixture components can be interpreted as probabilistic neurons in neurophysiological terms and, in this respect, a fixed probabilistic description contradicts the well-known short-term dynamic properties of biological neurons. By introducing iterative schemes of recognition, we show that some parameters of probabilistic neural networks can be "released" for the sake of dynamic processes without disturbing the statistically correct decision making. In particular, we can iteratively adapt the mixture component weights or modify the input pattern in order to facilitate correct recognition. Both procedures are shown to converge monotonically as special cases of the well-known EM algorithm for estimating mixtures.
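The iterative adaptation of mixture component weights can be illustrated with a minimal sketch (not the paper's exact procedure): a class-conditional density is modeled as a mixture of diagonal-Gaussian product components, and for a fixed input pattern the weights are re-estimated by EM steps, so the log-likelihood of the input is non-decreasing. All means, variances, and dimensions below are hypothetical placeholders.

```python
import numpy as np

# Sketch under assumed parameters: p(x) = sum_m w_m * prod_d N(x_d | mu_md, sigma_md).
# For one fixed input x, repeated EM steps on the component weights w
# monotonically increase log p(x), concentrating w on high-density components.

rng = np.random.default_rng(0)
M, D = 5, 3                       # number of components, input dimension (assumed)
mu = rng.normal(size=(M, D))      # hypothetical component means
sigma = np.full((M, D), 1.0)      # hypothetical component std deviations
w = np.full(M, 1.0 / M)           # initial uniform component weights

x = rng.normal(size=D)            # a single input pattern

def component_densities(x):
    # product of univariate Gaussians for each component (computed in log space)
    z = (x - mu) / sigma
    logf = -0.5 * np.sum(z**2 + np.log(2.0 * np.pi * sigma**2), axis=1)
    return np.exp(logf)

f = component_densities(x)
loglik = []
for _ in range(20):
    loglik.append(np.log(np.dot(w, f)))
    # E-step: posterior responsibilities of components given x;
    # M-step (single observation): the responsibilities become the new weights.
    q = w * f
    w = q / q.sum()

# the log-likelihood sequence is non-decreasing, as guaranteed by EM
assert all(b >= a - 1e-12 for a, b in zip(loglik, loglik[1:]))
```

With a single observation, the fixed point puts all weight on the component of highest density at x; the monotone growth of `loglik` mirrors the convergence property stated in the abstract.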