Bagging and boosting are well-known ensemble learning methods that combine multiple learned base models with the aim of improving generalization performance. To date, they have been used primarily in batch mode, i.e., they require multiple passes through the training data. In previous work, we presented online bagging and boosting algorithms that require only one pass through the training data, along with experimental results on some relatively small datasets. Through additional experiments on a variety of larger synthetic and real datasets, this paper demonstrates that our online versions perform comparably to their batch counterparts in terms of classification accuracy. We also demonstrate that the online algorithms yield a substantial reduction in running time because they require fewer passes through the training data.
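For concreteness, the following is a minimal Python sketch of the single-pass online bagging idea described in the abstract: each base model is updated on each arriving example k ~ Poisson(1) times, approximating the multiplicity the example would receive in a bootstrap replicate. The base-learner interface assumed here (base_learner_factory, learn_one, predict_one) is hypothetical and not taken from the paper; it stands in for any incrementally trainable classifier.

import math
import random
from collections import Counter

class OnlineBagging:
    """Sketch of single-pass online bagging: each incoming example is
    presented to each base model k ~ Poisson(1) times, approximating
    bootstrap sampling without storing or revisiting the data."""

    def __init__(self, base_learner_factory, n_models=10, seed=None):
        self.models = [base_learner_factory() for _ in range(n_models)]
        self.rng = random.Random(seed)

    def _poisson1(self):
        # Knuth's inversion method for a Poisson(lambda = 1) draw.
        threshold = math.exp(-1.0)
        k, p = 0, 1.0
        while True:
            k += 1
            p *= self.rng.random()
            if p <= threshold:
                return k - 1

    def learn_one(self, x, y):
        # Single-pass update: the example is used immediately and discarded.
        for model in self.models:
            for _ in range(self._poisson1()):
                model.learn_one(x, y)  # assumed incremental-update method

    def predict_one(self, x):
        # Unweighted majority vote over the base models.
        votes = Counter(model.predict_one(x) for model in self.models)
        return votes.most_common(1)[0][0]

Online boosting follows the same one-pass pattern, but instead of a fixed Poisson(1) rate it adjusts the Poisson parameter for each successive base model according to whether earlier models in the ensemble classified the example correctly, so that harder examples receive more weight downstream.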