Communications of the ACM
Cooling schedules for optimal annealing. Mathematics of Operations Research.
Simulated annealing and Boltzmann machines: a stochastic approach to combinatorial optimization and neural computing.
Learning in the presence of malicious errors. STOC '88: Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing.
Learnable and Nonlearnable Visual Concepts. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Learning DNF under the uniform distribution in quasi-polynomial time. COLT '90: Proceedings of the Third Annual Workshop on Computational Learning Theory.
An O(n^(log log n)) learning algorithm for DNF under the uniform distribution. COLT '92: Proceedings of the Fifth Annual Workshop on Computational Learning Theory.
On the limits of proper learnability of subclasses of DNF formulas. COLT '94: Proceedings of the Seventh Annual Conference on Computational Learning Theory.
On the Learnability of Disjunctive Normal Form Formulas. Machine Learning.
DNF: if you can't learn 'em, teach 'em: an interactive model of teaching. COLT '95: Proceedings of the Eighth Annual Conference on Computational Learning Theory.
On learning visual concepts and DNF formulae. Machine Learning.
Journal of Complexity, special issue for the Foundations of Computational Mathematics conference, Rio de Janeiro, Brazil, January 1997.
An efficient membership-query algorithm for learning DNF with respect to the uniform distribution. SFCS '94: Proceedings of the 35th Annual Symposium on Foundations of Computer Science.
Learning disjunction of conjunctions. IJCAI '85: Proceedings of the 9th International Joint Conference on Artificial Intelligence, Volume 1.
We describe a stochastic algorithm for learning Boolean functions from positive and negative examples. The Boolean functions are represented by disjunctive normal form (DNF) formulas. Given a target DNF formula F over n variables and a set of uniformly distributed positive and negative examples, our algorithm computes a hypothesis H that rejects a given fraction of the negative examples and has ε-bounded error on the positive examples. The stochastic algorithm uses logarithmic cooling schedules for inhomogeneous Markov chains. The paper focuses on experimental results and on comparisons with a previous approach in which all negative examples must be rejected [4]. The computational experiments provide evidence that a relatively high percentage of correct classifications on additionally presented examples can be achieved even when misclassifications are allowed on negative examples. The detailed convergence analysis will be presented in a forthcoming paper [3].
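The kind of procedure the abstract outlines, simulated annealing over DNF hypotheses with a logarithmic cooling schedule c(k) = Γ / ln(k + 2), can be sketched as below. This is a minimal illustration only, not the paper's algorithm: the hypothesis encoding, the neighborhood moves, and the cost weighting (penalizing errors on positives more heavily, so as to approximate ε-bounded positive error while rejecting most negatives) are all assumptions, and every function name is hypothetical.

```python
import math
import random

def anneal_dnf(pos, neg, n_vars, n_terms=4, steps=5000, gamma=5.0, seed=0):
    """Hypothetical sketch: anneal a DNF hypothesis with the logarithmic
    cooling schedule c(k) = gamma / ln(k + 2).

    A hypothesis is a list of terms; each term maps a variable index to a
    literal sign (True = positive literal, False = negated literal).
    An example x (tuple of bools) satisfies a term if it agrees with every
    literal, and the hypothesis accepts x iff some term is satisfied.
    """
    rng = random.Random(seed)

    def satisfies(term, x):
        return all(x[i] == sign for i, sign in term.items())

    def accepts(hyp, x):
        return any(satisfies(t, x) for t in hyp)

    def cost(hyp):
        # False negatives are weighted more heavily than false positives:
        # we want small error on positives while rejecting most negatives.
        fp = sum(accepts(hyp, x) for x in neg)
        fn = sum(not accepts(hyp, x) for x in pos)
        return 2 * fn + fp

    def neighbor(hyp):
        # Local move: drop, add, or flip a single literal in one term.
        new = [dict(t) for t in hyp]
        t = rng.randrange(len(new))
        v = rng.randrange(n_vars)
        if v in new[t] and rng.random() < 0.5:
            del new[t][v]
        else:
            new[t][v] = bool(rng.getrandbits(1))
        return new

    # Random single-literal initial hypothesis.
    hyp = [{rng.randrange(n_vars): bool(rng.getrandbits(1))}
           for _ in range(n_terms)]
    cur_cost = cost(hyp)
    best, best_cost = hyp, cur_cost
    for k in range(steps):
        c = gamma / math.log(k + 2)  # logarithmic cooling schedule
        cand = neighbor(hyp)
        cand_cost = cost(cand)
        # Metropolis acceptance rule of the inhomogeneous Markov chain:
        # always accept improvements, accept worsenings with probability
        # exp(-(increase in cost) / c).
        if (cand_cost <= cur_cost
                or rng.random() < math.exp((cur_cost - cand_cost) / c)):
            hyp, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = hyp, cur_cost
    return best, best_cost
```

Because the temperature decays only logarithmically, the chain remains "warm" for a long time, which is what makes logarithmic schedules amenable to convergence guarantees at the price of slow cooling in practice.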