The generalization error bounds produced by current error models, which rely on the number of effective parameters of a classifier and the number of training samples, are usually very loose because they cover the entire input space. However, the support vector machine (SVM), radial basis function neural network (RBFNN), and multilayer perceptron neural network (MLPNN) are local learning machines that treat unseen samples near the training samples as more important. In this paper, we propose a localized generalization error model that bounds from above the generalization error within a neighborhood of the training samples using a stochastic sensitivity measure. This model is then used to develop an architecture selection technique that, given a generalization error threshold, selects the classifier with maximal coverage of unseen samples. Experiments on 17 data sets from the University of California at Irvine (UCI) repository show that, in comparison with cross validation (CV), sequential learning, and two other ad hoc methods, our technique consistently yields the best testing classification accuracy with fewer hidden neurons and less training time.
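The abstract compresses three steps that are easier to see in code: estimate the stochastic sensitivity measure, combine it with the training error into a bound over a Q-neighborhood of the training samples, and search over architectures against the error threshold. The sketch below is a minimal illustration under assumed choices, not the paper's exact formulation: a hypercube Q-neighborhood sampled by Monte Carlo, a bound of the form (sqrt(R_emp) + sqrt(sensitivity) + A)^2 + eps with an assumed output-range constant A, and a hypothetical user-supplied trainer `train_fn`.

```python
import numpy as np

def stochastic_sensitivity(model, X, Q, n_perturb=50, rng=None):
    """Monte Carlo estimate of E[(dy)^2]: the mean squared change in the
    model output when each training input is perturbed uniformly within
    a hypercube of half-width Q (an assumed neighborhood shape)."""
    rng = np.random.default_rng(rng)
    y0 = model.predict(X)
    diffs = []
    for _ in range(n_perturb):
        noise = rng.uniform(-Q, Q, size=X.shape)
        diffs.append((model.predict(X + noise) - y0) ** 2)
    return float(np.mean(diffs))

def localized_error_bound(model, X, y, Q, A=1.0, eps=0.0):
    """Illustrative upper bound over the Q-neighborhood of the training set:
    training MSE plus the sensitivity term, combined as
    (sqrt(R_emp) + sqrt(sensitivity) + A)^2 + eps.
    A (output range) and eps (confidence term) are assumed constants here."""
    r_emp = np.mean((model.predict(X) - y) ** 2)
    sens = stochastic_sensitivity(model, X, Q)
    return (np.sqrt(r_emp) + np.sqrt(sens) + A) ** 2 + eps

def select_architecture(train_fn, X, y, max_neurons=20, threshold=0.25):
    """Greedy architecture search: for each hidden-layer size m, train a
    model and find (by coarse grid search) the largest Q whose localized
    bound stays under the threshold; return the m with the widest coverage."""
    best_m, best_q = None, -1.0
    for m in range(1, max_neurons + 1):
        model = train_fn(X, y, m)  # assumed trainer returning a fitted model
        q = max((q for q in np.linspace(0.01, 1.0, 50)
                 if localized_error_bound(model, X, y, q) <= threshold),
                default=0.0)
        if q > best_q:
            best_m, best_q = m, q
    return best_m, best_q
```

With a concrete `train_fn` (for instance, k-means centers plus a least-squares output layer for an RBFNN), `select_architecture(train_fn, X, y)` returns the hidden-layer size whose neighborhood coverage Q is largest under the threshold, mirroring the selection criterion the abstract describes.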