A structure composed of local linear perceptrons for approximating global class discriminants is investigated. Such local linear models may be combined in a cooperative or a competitive way. In the cooperative model, a weighted sum of the outputs of the local perceptrons is computed, where each weight is a function of the distance between the input and the position of the corresponding local perceptron. In the competitive model, the cost function dictates a mixture model in which only one of the local perceptrons gives an output. Learning of the local models' positions and of the linear mappings they implement is coupled, and both are supervised. We show that this is preferable to the uncoupled case, where the positions are trained in an unsupervised manner before the separate, supervised training of the mappings. We use goodness criteria based on the cross-entropy and give learning equations for both the cooperative and competitive cases. The coupled and uncoupled versions of the cooperative and competitive approaches are compared among themselves and with multilayer perceptrons with sigmoidal hidden units and radial basis function (RBF) networks with Gaussian units on the recognition of handwritten digits. The criteria of comparison are generalization accuracy, learning time, and the number of free parameters. We conclude that even on such a high-dimensional problem, such local models are promising. They generalize much better than RBFs and use much less memory. Compared with multilayer perceptrons, local models learn much faster, and they generalize as well, and sometimes better, with a comparable number of parameters.
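The two combination schemes described above can be sketched at inference time as follows. This is a minimal illustrative sketch, not the authors' method: the softmax gating on negative squared distance, the `beta` width parameter, and all function and variable names are assumptions introduced here, and the coupled supervised training of positions and mappings under the cross-entropy criterion is omitted.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def cooperative_predict(x, centers, W, b, beta=1.0):
    """Cooperative model: a weighted sum of the local linear
    perceptrons' outputs, where each weight decays with the squared
    distance between the input and that local model's position."""
    d2 = np.sum((centers - x) ** 2, axis=1)   # (K,) squared distances
    g = softmax(-beta * d2)                   # (K,) gating weights, sum to 1
    local = W @ x + b                         # (K, C) local linear outputs
    return softmax(g @ local)                 # (C,) class posteriors

def competitive_predict(x, centers, W, b):
    """Competitive model: winner-take-all, so only the nearest
    local perceptron produces the output."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    k = int(np.argmin(d2))                    # nearest local model wins
    return softmax(W[k] @ x + b[k])           # (C,) class posteriors

# Toy usage: K local models over a D-dimensional input, C classes.
rng = np.random.default_rng(0)
D, C, K = 4, 3, 5
centers = rng.normal(size=(K, D))             # local model positions
W = rng.normal(size=(K, C, D))                # one linear map per model
b = rng.normal(size=(K, C))
x = rng.normal(size=D)
p_coop = cooperative_predict(x, centers, W, b)
p_comp = competitive_predict(x, centers, W, b)
```

In both cases the final softmax turns the selected linear output into class posteriors, matching the cross-entropy criterion the abstract mentions; only the gating (smooth versus winner-take-all) differs between the two schemes.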