This paper continues our earlier work on (non)adaptive attribute-efficient learning. We consider exact learning of Boolean functions of n variables by membership queries, assuming that at most r variables are relevant. The learner works in consecutive rounds, such that the set of simultaneous queries in every round may depend on all information gained so far. For deterministic learning of specific monotone functions we prove that any strategy that uses an optimal number of queries needs Θ(r) rounds in the worst case. Furthermore, we make some progress regarding the constant factors in nearly query-optimal strategies. For example, we propose a strategy using roughly 2^(r+1) + 2r log₂ n queries in 3r rounds. In contrast to the limitations of deterministic strategies, there is a randomized strategy that learns monotone functions with 2^(O(r)) + O(r log n) expected queries in O(log r) expected rounds. In fact, this result holds for more general function classes. The second part of the paper addresses the computational complexity of parallel learning of arbitrary Boolean functions with r relevant variables. We obtain several strategies that use a constant number of rounds, O(2^r · poly(r log n)) queries, and only 2^(O(r)) · n · poly(log n) computation steps.
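The O(r log n) term in the query bounds comes from the standard adaptive idea of locating each relevant variable by binary search over membership queries. As a minimal illustration (not the paper's own strategy, which is parallel and more refined), the following Python sketch learns a monotone disjunction of at most r out of n variables with about r·log₂ n queries; the `oracle` interface is an assumption made for this example.

```python
def learn_disjunction(n, oracle):
    """Find the relevant variables of a hidden monotone disjunction
    (an OR of some subset of n Boolean variables) via membership queries.

    `oracle(assignment)` returns the hidden function's value (0 or 1)
    on an n-bit tuple. Each relevant variable is located by binary
    search over the remaining candidate positions, costing about
    log2(n) queries, so roughly r*log2(n) queries in total.
    """
    def query(on_positions):
        # Membership query: set exactly the given positions to 1.
        a = [0] * n
        for i in on_positions:
            a[i] = 1
        return oracle(tuple(a))

    relevant = []
    candidates = list(range(n))  # positions that may still be relevant

    # For a disjunction, query(S) = 1 iff S contains a relevant variable.
    while candidates and query(candidates):
        # Invariant: candidates[lo:hi] contains a relevant variable.
        lo, hi = 0, len(candidates)
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if query(candidates[lo:mid]):
                hi = mid  # a relevant variable lies in the left half
            else:
                lo = mid  # so one must lie in the right half
        relevant.append(candidates.pop(lo))

    return sorted(relevant)
```

For instance, with n = 16 and the hidden function x3 ∨ x7 ∨ x12, the sketch returns [3, 7, 12]. Removing each found variable from the candidate set before re-querying is what keeps the total at roughly r binary searches rather than one per variable of the domain.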