Based on the boosting-by-resampling version of AdaBoost, this paper proposes a local boosting algorithm for classification tasks. The main idea is that, in each iteration, a local error is calculated for every training instance, and a function of this local error is used to update the probability that the instance is selected for the next classifier's training set. When classifying a novel instance, similarity information between that instance and each training instance is taken into account. In addition, a parameter is introduced into the probability-update step so that the algorithm can achieve higher accuracy than AdaBoost. Experimental results on synthetic data and on several benchmark real-world data sets from the UCI repository show that the proposed method improves on AdaBoost in both prediction accuracy and robustness to classification noise. Furthermore, the diversity-accuracy patterns of the resulting ensembles are investigated with kappa-error diagrams.
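To make the idea concrete, below is a minimal sketch of a local boosting scheme of this kind. The abstract does not give the exact formulas, so several choices here are assumptions made purely for illustration: the local error is computed over a fixed k-nearest-neighbor neighborhood, the probability update is an exponential re-weighting tempered by a hypothetical parameter `beta` (standing in for the parameter the paper introduces), the prediction step weights each classifier by its accuracy on the query's neighbors, and the function names `local_boost_fit` and `local_boost_predict` are invented for this sketch. None of this should be read as the authors' exact method.

```python
# Hypothetical sketch of local boosting; the k-NN neighborhood, the
# exponential update, and `beta` are illustrative assumptions, not the
# paper's actual formulas.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import NearestNeighbors

def local_boost_fit(X, y, n_rounds=10, k=5, beta=0.5, seed=None):
    rng = np.random.default_rng(seed)
    n = len(X)
    p = np.full(n, 1.0 / n)              # sampling probabilities over instances
    nn = NearestNeighbors(n_neighbors=k).fit(X)
    _, neigh = nn.kneighbors(X)          # fixed k-NN neighborhood of each instance
    classifiers = []
    for _ in range(n_rounds):
        # Boosting by resampling: draw this round's training set according to p.
        idx = rng.choice(n, size=n, replace=True, p=p)
        clf = DecisionTreeClassifier(max_depth=3).fit(X[idx], y[idx])
        pred = clf.predict(X)
        # Local error: fraction of each instance's neighbors that are misclassified.
        local_err = (pred[neigh] != y[neigh]).mean(axis=1)
        # Assumed update: raise the selection probability of instances whose
        # neighborhoods are poorly classified, tempered by beta.
        p *= np.exp(beta * local_err)
        p /= p.sum()
        classifiers.append(clf)
    return classifiers, X, y, nn

def local_boost_predict(model, X_new, k=5):
    classifiers, X, y, nn = model
    _, neigh = nn.kneighbors(X_new, n_neighbors=k)
    preds = []
    for x, nb in zip(X_new, neigh):
        votes = {}
        for clf in classifiers:
            # Similarity-based vote: weight each classifier by its accuracy
            # on the training instances nearest to the query.
            w = (clf.predict(X[nb]) == y[nb]).mean()
            label = clf.predict(x.reshape(1, -1))[0]
            votes[label] = votes.get(label, 0.0) + w
        preds.append(max(votes, key=votes.get))
    return np.array(preds)

# Illustrative usage on a toy problem:
# from sklearn.datasets import make_classification
# X, y = make_classification(n_samples=200, random_state=0)
# model = local_boost_fit(X, y, n_rounds=10)
# y_hat = local_boost_predict(model, X[:5])
```

The key difference from plain AdaBoost in this sketch is that both the re-weighting and the final vote are driven by behavior in each instance's neighborhood rather than by a single global error, which is what gives the method its claimed robustness to classification noise.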