In many scientific disciplines, experimental data is generated at high rates, resulting in a continuous stream of data. Databases of previous measurements can be used to train classifiers that categorize newly incoming data. However, large training sets can yield high classification times, e.g., for approaches that rely on nearest neighbors or kernel density estimation. Anytime algorithms circumvent this problem: they can be interrupted at will, and their performance improves with additional computation time. Two important quality criteria for anytime classifiers are high accuracy for arbitrary time allowances and a monotonic increase of accuracy over time. The Bayes tree has been proposed as a naive Bayesian approach to anytime classification based on kernel density estimation; however, its decision process often results in an oscillating accuracy over time. In this paper, we propose the BT* method and show in extensive experiments that it outperforms previous methods in both monotonicity and anytime accuracy, yielding near-perfect results on a wide range of domains.
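To illustrate the anytime principle described above, the sketch below shows a deliberately minimal interruptible classifier (it is not the BT* method or the Bayes tree from the paper): each class is modeled by a kernel density estimate that is refined one training point at a time, so a prediction can be returned after any computation budget, and larger budgets refine the estimate. The class name, `budget` parameter, and fixed bandwidth are illustrative assumptions.

```python
import math

class AnytimeKDEClassifier:
    """Illustrative anytime classifier (NOT the paper's BT* method):
    per-class Gaussian kernel density estimates over 1-D features,
    refined point by point so classification can stop at any budget."""

    def __init__(self, bandwidth=0.5):
        self.h = bandwidth        # fixed kernel bandwidth (assumption)
        self.points = {}          # label -> list of 1-D training values

    def fit(self, xs, ys):
        for x, y in zip(xs, ys):
            self.points.setdefault(y, []).append(x)

    def classify(self, x, budget):
        """Return the best label after summing at most `budget` kernel
        terms per class; a larger budget uses more training points and
        so refines the density estimate before the decision."""
        scores = {}
        for label, pts in self.points.items():
            n = min(budget, len(pts))
            # average Gaussian kernel mass over the points seen so far
            s = sum(math.exp(-((x - p) / self.h) ** 2 / 2.0) for p in pts[:n])
            scores[label] = s / max(n, 1)
        return max(scores, key=scores.get)
```

In a real stream setting, `classify` would be driven by the arrival of the next item rather than an explicit `budget`, and the order in which training points are consumed (here: insertion order) is exactly the kind of decision process whose quality determines how monotonically accuracy grows with time.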