Individual classifiers do not always yield satisfactory results. In data mining, such failures are usually attributed to limitations inherent in the data itself, which arise because data files are created for different purposes and put to varied uses. One proposed way of dealing with these shortcomings is to employ classifier ensembles: sets of models built on the same data file or on different subsets of it. Although this approach often yields a noticeable increase in classification accuracy, it considerably complicates, and in some cases effectively prevents, interpretation of the results, because the learning tasks for the individual models are defined by randomization. The purpose of this paper is to present an idea for using data contexts to define the learning tasks of classifier ensembles. The results achieved are promising.
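As a rough illustration of the contrast the abstract draws (a minimal sketch, not the paper's actual algorithm), the Python code below builds two ensembles: one whose learning tasks are defined by random bootstrap resampling, as in bagging, and one whose learning tasks are defined by the values of a hypothetical context attribute, so that each base model has an interpretable scope. The function names and the context_column parameter are illustrative assumptions.

```python
# Sketch only: contrasts randomized learning tasks (bagging-style) with
# context-defined learning tasks. Not the method proposed in the paper.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_ensemble(X, y, n_models=10, seed=0):
    """Each learning task is a random bootstrap sample of (X, y);
    the resulting models resist interpretation."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))  # bootstrap resample
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return models

def context_ensemble(X, y, context_column):
    """Each learning task is the subset of examples sharing one value of a
    context attribute, so every base model covers a readable data context."""
    models = {}
    for value in np.unique(X[:, context_column]):
        mask = X[:, context_column] == value
        models[value] = DecisionTreeClassifier().fit(X[mask], y[mask])
    return models

def predict_with_context(models, x, context_column):
    # Route the example to the model trained on its context value.
    return models[x[context_column]].predict(x.reshape(1, -1))[0]
```

In the second variant, each base classifier can be inspected on its own terms ("the model for context value v"), which is the kind of interpretability that randomized task definitions forfeit.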