An established method for detecting concept drift in data streams is to perform statistical hypothesis testing on the multivariate data in the stream. Statistical theory offers rank-based statistics for this task. However, these statistics depend on a fixed set of characteristics of the underlying distribution: they work well whenever the change in the distribution affects the properties measured by the statistic, but they perform poorly when the drift influences the characteristics captured by the test statistic only to a small degree. To address this problem, we show how uniform convergence bounds from learning theory can be adapted for adaptive concept drift detection. In particular, we present three novel drift detection tests whose test statistics are dynamically adapted to match the actual data at hand. The first is based on a rank statistic on density estimates for a binary representation of the data, the second compares the average margins of a linear classifier induced by the 1-norm support vector machine (SVM), and the third is based on the average zero-one, sigmoid, or stepwise linear error rate of an SVM classifier. We compare these new approaches with the maximum mean discrepancy method, the StreamKrimp system, and the multivariate Wald–Wolfowitz test. The results indicate that the new methods detect concept drift reliably and perform favorably in a precision-recall analysis. Copyright © 2009 Wiley Periodicals, Inc. Statistical Analysis and Data Mining 2: 311-327, 2009
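A minimal sketch of the third idea, detecting drift from a classifier's held-out error rate between a reference window and a current window. This is not the paper's implementation: it substitutes a small logistic-regression classifier trained by gradient descent for the SVM, and the function name, window shapes, and accuracy threshold are illustrative assumptions. If the classifier distinguishes the two windows clearly better than chance, the windows likely come from different distributions.

```python
import numpy as np

def classifier_drift_test(ref, cur, threshold=0.65, seed=0):
    """Flag drift if a linear classifier can tell two windows apart.

    ref, cur: (n, d) arrays sampled from the reference and current
    windows.  Held-out accuracy well above 0.5 suggests the windows
    were drawn from different distributions.  The threshold is an
    illustrative choice, not a calibrated significance level.
    """
    rng = np.random.default_rng(seed)
    # Label reference samples 0 and current samples 1, then shuffle.
    X = np.vstack([ref, cur])
    y = np.concatenate([np.zeros(len(ref)), np.ones(len(cur))])
    idx = rng.permutation(len(X))
    X, y = X[idx], y[idx]
    half = len(X) // 2
    Xtr, ytr, Xte, yte = X[:half], y[:half], X[half:], y[half:]

    # Logistic regression by gradient descent (stand-in for the
    # paper's 1-norm SVM).
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(Xtr @ w + b)))
        grad = p - ytr                      # dLoss/dLogits
        w -= 0.1 * (Xtr.T @ grad) / len(Xtr)
        b -= 0.1 * grad.mean()

    # Zero-one accuracy on the held-out half.
    acc = np.mean((Xte @ w + b > 0) == (yte == 1))
    return acc > threshold, acc
```

For example, comparing a window drawn from N(0, 1) against one drawn from N(2, 1) should flag drift, while two windows from the same distribution should not; the dynamically trained classifier plays the role of the adaptive test statistic described above.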