Recognition of Properties by Probabilistic Neural Networks
ICANN '09 Proceedings of the 19th International Conference on Artificial Neural Networks: Part II
To design probabilistic neural networks in the pattern-recognition framework, we estimate class-conditional probability distributions in the form of finite mixtures of product components. As the mixture components correspond to neurons, we specify the properties of the neurons in terms of component parameters. The probabilistic features defined by the neuron outputs can be used to transform the classification problem without information loss while simultaneously minimizing the Shannon entropy of the feature space. We show that, instead of reducing dimensionality, the decision problem can be simplified by a binary approximation of the probabilistic features. In experiments the resulting binary features not only improve recognition accuracy but are also nearly independent, in accordance with the minimum-entropy property.
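The two feature maps described in the abstract can be sketched in a few lines: the probabilistic features are the posterior component weights of a finite mixture of product (diagonal-covariance) components, and the binary approximation replaces this posterior vector by an indicator of the dominant component. This is only an illustrative sketch under assumed parameter names (weights `w`, means `mu`, variances `var`); the paper's actual model and estimation procedure are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mixture of M product (diagonal) Gaussian components in D dims.
M, D = 3, 4
w = np.array([0.5, 0.3, 0.2])          # mixing weights, sum to 1
mu = rng.normal(size=(M, D))           # component means
var = np.full((M, D), 0.25)            # per-dimension variances

def log_component_densities(x):
    # log of each product component density F(x | m) = prod_d N(x_d; mu_md, var_md)
    return -0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var), axis=1)

def probabilistic_features(x):
    # posterior component weights f_m(x) = w_m F(x|m) / sum_j w_j F(x|j)
    log_p = np.log(w) + log_component_densities(x)
    log_p -= log_p.max()               # subtract max for numerical stability
    p = np.exp(log_p)
    return p / p.sum()

def binary_features(x):
    # binary approximation: 1 for the dominant component, 0 elsewhere
    f = probabilistic_features(x)
    b = np.zeros_like(f)
    b[np.argmax(f)] = 1.0
    return b

x = mu[1] + 0.1 * rng.normal(size=D)   # a point drawn near component 1
print(probabilistic_features(x).round(3))
print(binary_features(x))
```

Because the product components are sharply peaked relative to the distances between their means, the posterior vector is typically dominated by a single component, which is why the hard 0/1 approximation loses little information in practice.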