Ensemble of Linear Perceptrons with Confidence Level Output
HIS '04 Proceedings of the Fourth International Conference on Hybrid Intelligent Systems
This paper describes a method for producing an ensemble of general classifiers using unsupervised learning. The method follows a 'divide and conquer' strategy: competitive learning partitions the feature space into subregions, and a classifier is constructed in each subregion. The ensemble starts from a single member, and new members are added during training; this growth is self-determined and continues until the ensemble reaches the desired accuracy. The ensemble's overall response to an input pattern is the output of the winning member for that pattern. The method is generic, i.e. not bound to a specific type of classifier, and it is suitable for parallel implementation and for data mining. A basic wide-margin linear classifier is used in the experiments reported here. Experimental results on artificial and real-world data are presented and compared with those of a Gaussian SVM. A parallel implementation of the method is also described.
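The core idea, an ensemble of linear perceptrons gated by competitive learning, with the winning member alone producing the output, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name and all parameters are hypothetical, a fixed member count is assumed (the paper's self-growing mechanism and wide-margin training are omitted), and a plain perceptron update stands in for each member's classifier.

```python
import numpy as np

class WinnerTakeAllEnsemble:
    """Hypothetical sketch: linear perceptrons gated by competitive
    learning. Each prototype claims a subregion of the feature space;
    only the winning member responds to a given input pattern."""

    def __init__(self, n_members, n_features, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.prototypes = rng.normal(size=(n_members, n_features))
        self.weights = np.zeros((n_members, n_features))
        self.biases = np.zeros(n_members)
        self.lr = lr

    def _winner(self, x):
        # Competitive step: the member whose prototype is closest wins.
        return int(np.argmin(np.linalg.norm(self.prototypes - x, axis=1)))

    def fit(self, X, y, epochs=50):
        for _ in range(epochs):
            for x, t in zip(X, y):  # targets t in {-1, +1}
                k = self._winner(x)
                # Unsupervised step: move the winning prototype
                # toward the sample (competitive learning).
                self.prototypes[k] += self.lr * (x - self.prototypes[k])
                # Supervised step: perceptron update for the winner only,
                # applied when the sample is misclassified.
                if t * (self.weights[k] @ x + self.biases[k]) <= 0:
                    self.weights[k] += self.lr * t * x
                    self.biases[k] += self.lr * t
        return self

    def predict(self, X):
        # The ensemble's response is the winning member's output.
        out = []
        for x in X:
            k = self._winner(x)
            out.append(1 if self.weights[k] @ x + self.biases[k] >= 0 else -1)
        return np.array(out)
```

Because each member trains only on the patterns routed to its subregion, the members can be updated independently, which is what makes the parallel implementation mentioned in the abstract natural.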