Various methods exist for reducing the correlation between classifiers in a multiple classifier framework, in the expectation that the composite classifier will exhibit improved performance and/or be simpler to automate than a single classifier. In this paper we investigate how generalisation is affected by varying the complexity of unstable base classifiers, implemented as identical single-hidden-layer MLP networks with fixed parameters. We also introduce a technique that uses recursive partitioning to selectively perturb the training set, and show that it improves performance and reduces sensitivity to base classifier complexity. Benchmark experiments include artificial and real data with optimal error rates greater than eighteen percent.
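The general framework described above — a committee of identical single-hidden-layer MLPs, each trained on a perturbed version of the training set and combined by majority vote — can be illustrated with a minimal sketch. Note this is a generic bagging-style illustration, not the paper's recursive-partitioning method; all names (`MLP`, `bagged_ensemble`, `vote`), the synthetic data, and the hyperparameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Two overlapping Gaussian classes, so the optimal error rate is non-zero.
    X = np.vstack([rng.normal([-1, 0], 1.0, (n, 2)),
                   rng.normal([+1, 0], 1.0, (n, 2))])
    y = np.array([0] * n + [1] * n)
    return X, y

class MLP:
    """Single-hidden-layer MLP with tanh hidden units and a sigmoid output,
    trained by full-batch gradient descent on the cross-entropy loss."""
    def __init__(self, n_in, n_hidden, seed):
        r = np.random.default_rng(seed)
        self.W1 = r.normal(0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = r.normal(0, 0.5, n_hidden)
        self.b2 = 0.0

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)   # hidden activations (cached)
        return 1.0 / (1.0 + np.exp(-(self.h @ self.W2 + self.b2)))

    def fit(self, X, y, epochs=200, lr=0.5):
        for _ in range(epochs):
            p = self.forward(X)
            g = (p - y) / len(y)                  # dLoss/dz at the output
            gh = np.outer(g, self.W2) * (1 - self.h ** 2)  # backprop to hidden
            self.W2 -= lr * self.h.T @ g
            self.b2 -= lr * g.sum()
            self.W1 -= lr * X.T @ gh
            self.b1 -= lr * gh.sum(axis=0)

    def predict(self, X):
        return (self.forward(X) > 0.5).astype(int)

def bagged_ensemble(X, y, n_models=7, n_hidden=4):
    # Each base classifier sees a bootstrap resample: one simple way
    # to perturb the training set and decorrelate the members.
    models = []
    for i in range(n_models):
        idx = rng.integers(0, len(y), len(y))
        m = MLP(X.shape[1], n_hidden, seed=i)
        m.fit(X[idx], y[idx])
        models.append(m)
    return models

def vote(models, X):
    # Majority vote over the base classifiers' hard decisions.
    votes = np.stack([m.predict(X) for m in models])
    return (votes.mean(axis=0) > 0.5).astype(int)

Xtr, ytr = make_data(200)
Xte, yte = make_data(200)
acc = (vote(bagged_ensemble(Xtr, ytr), Xte) == yte).mean()
print(f"ensemble test accuracy: {acc:.2f}")
```

Because the two classes overlap, no classifier can be perfect here; the point of the sketch is only the structure (fixed-architecture base networks, perturbed training sets, vote combination), with the base-network complexity controlled by `n_hidden`.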