This work investigates the scalability of Probabilistic Neural Networks (PNNs) via parallelization, localization, and chain gradient tuning. Since the PNN model is inherently parallel, three common parallel approaches are studied here: data parallelism, neuron parallelism, and pipelining. Localization methods based on clustering algorithms are used to reduce the size of the PNN hidden layer. Localization can be problematic for multi-class data, and in this paper we propose two simple, fast, approximate solutions. The first applies the sigma smoothing parameters obtained from the initial parallel PNN training directly to the clustering; this achieves a substantial reduction in the number of neurons without significant loss of recognition accuracy. The second provides additional tuning: using confidence outputs, we employ a chain training approach to search for the best possible PNN architecture.
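
To make the PNN structure referred to above concrete, the following minimal NumPy sketch implements the classical pattern/summation/output layers with a Gaussian kernel. The function name pnn_predict and the single global sigma are illustrative assumptions, not the paper's implementation.

import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    # Pattern layer: one Gaussian kernel neuron per training example.
    # Summation layer: per-class mean of the kernel activations.
    # Output layer: argmax over the class scores.
    classes = np.unique(y_train)
    scores = np.empty((len(X_test), len(classes)))
    for j, c in enumerate(classes):
        Xc = X_train[y_train == c]
        # Squared Euclidean distances, shape (n_test, n_c).
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        scores[:, j] = np.exp(-d2 / (2.0 * sigma ** 2)).mean(axis=1)
    return classes[scores.argmax(axis=1)], scores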
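
Of the three parallel approaches, data parallelism is the simplest to sketch: the training patterns (and hence the pattern-layer neurons) are partitioned across workers, each worker computes partial per-class kernel sums, and the results are reduced into the final class averages. The sketch below uses Python's multiprocessing as a stand-in for the parallel architectures studied in the paper; pnn_predict_parallel, workers, and the chunking scheme are assumptions for illustration.

from multiprocessing import Pool
import numpy as np

def _partial_sums(args):
    # Worker: holds one slice of the pattern layer; returns partial
    # per-class kernel sums and neuron counts for the final reduction.
    X_chunk, y_chunk, X_test, sigma, classes = args
    sums = np.zeros((len(X_test), len(classes)))
    counts = np.zeros(len(classes))
    for j, c in enumerate(classes):
        Xc = X_chunk[y_chunk == c]
        if len(Xc):
            d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
            sums[:, j] = np.exp(-d2 / (2.0 * sigma ** 2)).sum(axis=1)
            counts[j] = len(Xc)
    return sums, counts

def pnn_predict_parallel(X_train, y_train, X_test, sigma=0.5, workers=4):
    classes = np.unique(y_train)
    idx = np.array_split(np.arange(len(X_train)), workers)
    jobs = [(X_train[i], y_train[i], X_test, sigma, classes) for i in idx]
    with Pool(workers) as pool:
        parts = pool.map(_partial_sums, jobs)
    sums = sum(p[0] for p in parts)                 # reduce: element-wise add
    counts = np.maximum(sum(p[1] for p in parts), 1)
    return classes[(sums / counts).argmax(axis=1)]

On platforms that spawn rather than fork, the call must sit under an if __name__ == "__main__": guard.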
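
The localization step amounts to per-class clustering that replaces each class's pattern neurons with cluster centroids, shrinking the hidden layer from one neuron per training example to a handful per class. The sketch below uses plain k-means from scikit-learn as a stand-in; the clustering algorithm actually used in the paper, and in particular its direct use of the sigma values from the initial parallel training, are not reproduced here, and localize_pnn and k are illustrative names.

import numpy as np
from sklearn.cluster import KMeans

def localize_pnn(X_train, y_train, k=10):
    # Replace each class's pattern neurons with at most k centroids,
    # reducing the hidden layer to roughly k * n_classes neurons.
    centers, labels = [], []
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        km = KMeans(n_clusters=min(k, len(Xc)), n_init=10).fit(Xc)
        centers.append(km.cluster_centers_)
        labels.append(np.full(len(km.cluster_centers_), c))
    return np.vstack(centers), np.concatenate(labels)

The reduced prototype set can then replace the full training set in pnn_predict, trading a small amount of recognition accuracy for a much smaller hidden layer, in the spirit of the first proposed solution.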