Framework for stream learning algorithms
International Journal of Computational Intelligence Studies
Learning from data streams is a research area of increasing importance, and many stream learning algorithms have been developed. Most of them learn decision models that continuously evolve over time, run in resource-aware environments, and detect and react to changes in the environment generating the data. One important issue, not yet adequately addressed, is the design of experimental work to evaluate and compare decision models that evolve over time. In this paper we propose a general framework for assessing the quality of streaming learning algorithms. We advocate the use of Predictive Sequential (prequential) error estimates over a sliding window to assess the performance of algorithms that learn from open-ended data streams in non-stationary environments. The paper studies convergence properties and methods for the comparative assessment of algorithms' performance.
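The prequential (test-then-train) protocol mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: each arriving example is first used to test the current model, its 0/1 loss is recorded, and only then is the example used for training; the sliding-window error is the mean loss over the most recent `window` examples. The majority-class learner used here is a hypothetical stand-in for any incremental classifier.

```python
from collections import Counter, deque

def prequential_errors(stream, window=100):
    """Test-then-train over (x, y) pairs; yield the sliding-window error."""
    counts = Counter()             # toy incremental model: majority class
    losses = deque(maxlen=window)  # 0/1 losses of the last `window` examples
    for x, y in stream:
        # 1) Test: predict with the current model before seeing the label.
        y_hat = counts.most_common(1)[0][0] if counts else None
        losses.append(0 if y_hat == y else 1)
        # 2) Train: update the model with the now-labelled example.
        counts[y] += 1
        yield sum(losses) / len(losses)

# Usage on a stream with an abrupt concept change: the windowed estimate
# reacts to the drift, whereas a cumulative (holdout-style) mean would lag.
stream = [(None, 0)] * 200 + [(None, 1)] * 200
errs = list(prequential_errors(stream, window=50))
```

Because the window forgets old losses, the estimate reflects the model's current performance under non-stationarity, which is the property the framework exploits.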