This work presents a novel learning algorithm for efficiently constructing radial basis function (RBF) networks that deliver the same level of accuracy as support vector machines (SVMs) in data classification applications. The proposed learning algorithm constructs one RBF subnetwork to approximate the probability density function of each class of objects in the training data set. With respect to algorithm design, its main distinction is a novel kernel density estimation algorithm with an average time complexity of O(n log n), where n is the number of samples in the training data set. One important advantage of the proposed learning algorithm over the SVM is that it generally takes far less time to construct a data classifier with an optimized parameter setting. This feature is significant for many contemporary applications, in particular those in which new objects are continuously added to an already large database. Another desirable feature is that the constructed RBF networks can classify data sets with more than two classes of objects in a single run; in other words, unlike the SVM, there is no need to resort to mechanisms such as one-against-one or one-against-all. The comparison with the SVM is of particular interest because a number of recent studies have shown that SVMs are generally able to deliver higher classification accuracy than other existing data classification algorithms. As the proposed learning algorithm is instance-based, the data reduction issue is also addressed in this paper.
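The per-class scheme described above can be sketched in a few lines: each class's training samples serve as Gaussian kernel centres of one RBF subnetwork, and a query is assigned to the class whose estimated density is highest, which handles any number of classes in one pass. This is only a minimal illustration of the decision rule; the paper's O(n log n) kernel density estimation algorithm is not reproduced here, and the naive O(n) per-query density sum, the fixed kernel width `sigma`, and all function names are assumptions for illustration.

```python
import math
import random

def fit_class_densities(X, y):
    """One RBF "subnetwork" per class: the class's training samples
    act as Gaussian kernel centres of a kernel density estimate."""
    subnets = {}
    for xi, yi in zip(X, y):
        subnets.setdefault(yi, []).append(xi)
    return subnets

def class_density(x, centers, sigma):
    # Average Gaussian kernel response approximates the class-conditional
    # density (up to a normalising constant shared by all classes).
    total = 0.0
    for c in centers:
        d2 = sum((a - b) ** 2 for a, b in zip(x, c))
        total += math.exp(-d2 / (2.0 * sigma * sigma))
    return total / len(centers)

def predict(x, subnets, sigma=1.0):
    # Multi-class decision in a single run: pick the class with the
    # largest estimated density -- no one-against-one/all decomposition.
    return max(subnets, key=lambda cls: class_density(x, subnets[cls], sigma))

# Toy data: two well-separated 2-D Gaussian blobs.
random.seed(0)
X = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)] \
  + [(random.gauss(5, 1), random.gauss(5, 1)) for _ in range(50)]
y = [0] * 50 + [1] * 50
subnets = fit_class_densities(X, y)
print(predict((0.2, -0.1), subnets))  # expected: 0
print(predict((4.8, 5.3), subnets))   # expected: 1
```

Because every class contributes its own density estimate, adding a new class only adds one more subnetwork rather than retraining a set of pairwise binary classifiers.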
One interesting observation in this regard is that, for all three data sets used in the data reduction experiments, the number of training samples remaining after a naïve data reduction mechanism is applied is quite close to the number of support vectors identified by the SVM software. This paper also compares the performance of the RBF networks constructed with the proposed learning algorithm against those constructed with a conventional cluster-based learning algorithm. The most interesting finding is that, with respect to data classification, the distributions of training samples near the boundaries between different classes of objects carry more crucial information than the distributions of samples in the inner parts of the clusters.
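A boundary-preserving reduction of the kind discussed above can be illustrated with a simple heuristic: keep a sample only if its k nearest neighbours contain at least one sample of a different class, so interior samples are discarded while near-boundary samples survive. Note this is an assumed stand-in for illustration, not the paper's actual data reduction mechanism, and the quadratic neighbour search is written for clarity, not speed.

```python
def boundary_samples(X, y, k=2):
    """Assumed boundary-keeping heuristic (not the paper's exact method):
    retain sample i iff any of its k nearest neighbours has a different
    class label. Uses a brute-force O(n^2) neighbour search."""
    kept = []
    for i, xi in enumerate(X):
        # Squared Euclidean distances to every other sample.
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(xi, xj)), j)
            for j, xj in enumerate(X) if j != i
        )
        neighbour_labels = [y[j] for _, j in dists[:k]]
        if any(c != y[i] for c in neighbour_labels):
            kept.append(i)
    return kept

# 1-D example embedded in 2-D: two clusters meeting near x = 1.
X = [(0.0, 0.0), (0.1, 0.0), (0.9, 0.0), (1.1, 0.0), (2.0, 0.0), (2.1, 0.0)]
y = [0, 0, 0, 1, 1, 1]
print(boundary_samples(X, y, k=2))  # expected: [2, 3]
```

Only the two samples straddling the class boundary are kept, mirroring the observation that boundary-region samples carry the information most relevant to classification, much as support vectors do for the SVM.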