In recent years the Probabilistic Neural Network (PNN) has been used in a large number of applications owing to its simplicity and efficiency. A PNN assigns a test sample to the class with the maximum estimated likelihood. The likelihood of the test sample with respect to each training sample is computed in the pattern layer through kernel density estimation combined with a simple Bayesian rule. The kernel is usually a standard probability density function, such as a Gaussian, and a global spread parameter determines the width of the kernel. The Bayesian rule in the pattern layer estimates the conditional probability of each class given an input vector without accounting for possible local densities or heterogeneity in the training data. In this paper, an enhanced and generalized PNN (EPNN) is presented that uses local decision circles (LDCs) to overcome this shortcoming and to improve robustness to noise in the data. Local decision circles enable the EPNN to incorporate local information and the non-homogeneity present in the training population. Each circle has a radius that limits the contribution of the local decision. In the conventional PNN only the spread parameter can be optimized for maximum classification accuracy; in the proposed EPNN two parameters, the spread parameter and the radius of the local decision circles, are optimized to maximize the performance of the model. The accuracy and robustness of the EPNN are compared with those of the PNN on three benchmark classification problems (iris, diabetes, and breast cancer data) using five ratios of training data to testing data: 90:10, 80:20, 70:30, 60:40, and 50:50. The EPNN yields the most accurate results consistently for all ratios. The robustness of the PNN and the EPNN is investigated using different values of the signal-to-noise ratio (SNR); the accuracy of the EPNN is consistently higher than that of the PNN at all SNR levels and for all ratios of training data to testing data.
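The classification rule described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it assumes a Gaussian kernel with a single global spread parameter and Euclidean distance, and `epnn_classify` models the local-decision-circle idea under the simplifying assumption that the circle merely gates which training samples contribute to the density estimate; the exact formulation in the paper may differ.

```python
import numpy as np

def pnn_classify(x, X_train, y_train, sigma):
    """Classic PNN: assign x to the class whose training samples give the
    highest average Gaussian kernel density at x. sigma is the global
    spread parameter controlling the kernel width."""
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        # Squared Euclidean distances from x to all class-c samples
        d2 = np.sum((Xc - x) ** 2, axis=1)
        # Average Gaussian kernel value = class-conditional density estimate
        scores[c] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)

def epnn_classify(x, X_train, y_train, sigma, radius):
    """Sketch of the local-decision-circle idea: only training samples
    lying within `radius` of x contribute, so the decision reflects the
    local density rather than the whole, possibly heterogeneous, class."""
    d2_all = np.sum((X_train - x) ** 2, axis=1)
    mask = d2_all <= radius ** 2      # samples inside the decision circle
    if not mask.any():                # empty circle: fall back to plain PNN
        return pnn_classify(x, X_train, y_train, sigma)
    return pnn_classify(x, X_train[mask], y_train[mask], sigma)
```

In this sketch both `sigma` and `radius` are free parameters, mirroring the two-parameter optimization the abstract describes for the EPNN versus the single spread parameter of the conventional PNN.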