Online multiplicative weight-update learning algorithms, such as Winnow, have proven remarkably effective at learning simple disjunctions with few relevant attributes. The aim of this paper is to extend the Winnow algorithm to more expressive concepts characterized by DNF formulas with few relevant rules. For such problems, Winnow still converges quickly, since its number of mistakes grows only linearly with the number of attributes. Yet the learner faces a serious computational barrier: at each prediction, it must evaluate a weighted sum over an exponential number of rules. To circumvent this issue, we cast the prediction problem as a Weighted Model Counting problem. The resulting algorithm, SharpNow, is an exact simulation of Winnow equipped with backtracking, caching, and decomposition techniques. Experiments on static and drifting problems demonstrate the performance of the algorithm in terms of accuracy and speed.
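For readers unfamiliar with the base learner, the following is a minimal sketch of the classic Winnow update over explicit binary attributes. It is only an illustration of the multiplicative weight-update scheme the abstract refers to, not the paper's SharpNow algorithm, which instead simulates this update over exponentially many rules by reducing the prediction sum to a weighted model count; the function names, the promotion factor alpha = 2, and the threshold choice are illustrative assumptions.

```python
# Hedged sketch of classic Winnow (not SharpNow): multiplicative
# promotion/demotion of attribute weights on prediction mistakes.

def winnow_predict(weights, x, threshold):
    """Predict 1 iff the weighted sum of active attributes reaches the threshold."""
    return 1 if sum(w for w, xi in zip(weights, x) if xi) >= threshold else 0

def winnow_update(weights, x, y, y_hat, alpha=2.0):
    """On a mistake, multiplicatively promote (false negative) or
    demote (false positive) the weights of the active attributes."""
    if y_hat == y:
        return weights
    factor = alpha if y == 1 else 1.0 / alpha
    return [w * factor if xi else w for w, xi in zip(weights, x)]

# Toy run: learning the disjunction x0 OR x2 over n = 4 attributes.
n = 4
weights = [1.0] * n
threshold = float(n)  # a common choice for Winnow's threshold
for x, y in [([1, 0, 0, 0], 1), ([0, 1, 0, 0], 0), ([0, 0, 1, 1], 1)]:
    y_hat = winnow_predict(weights, x, threshold)
    weights = winnow_update(weights, x, y, y_hat)
```

In the DNF setting studied in the paper, the "attributes" are all candidate rules, so the sum in `winnow_predict` ranges over exponentially many terms; the paper's contribution is evaluating that sum exactly with weighted model counting techniques (backtracking, caching, decomposition) rather than by explicit enumeration as above.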