Interpolation and approximation of sparse multivariate polynomials over GF(2). SIAM Journal on Computing.
Learning binary relations and total orders. SIAM Journal on Computing; Journal of Computer and System Sciences.
On specifying Boolean functions by labelled examples. Discrete Applied Mathematics.
Witness sets for families of binary vectors. Journal of Combinatorial Theory, Series A.
Learning sparse multivariate polynomials over a field with queries and counterexamples. Journal of Computer and System Sciences.
Simple learning algorithms for decision trees and multivariate polynomials. SIAM Journal on Computing.
On teaching and learning intersection-closed concept classes. EuroCOLT '99: Proceedings of the 4th European Conference on Computational Learning Theory.
Learnability and automatizability. FOCS '04: Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer Science.
Learning functions of k relevant variables. Journal of Computer and System Sciences, special issue: STOC 2003.
Teaching classes with high teaching dimension using few examples. COLT '05: Proceedings of the 18th Annual Conference on Learning Theory.
Measuring teachability using variants of the teaching dimension. Theoretical Computer Science.
Recent developments in algorithmic teaching. LATA '09: Proceedings of the 3rd International Conference on Language and Automata Theory and Applications.
Teaching randomized learners with feedback. Information and Computation.
Teaching memoryless randomized learners without feedback. ALT '06: Proceedings of the 17th International Conference on Algorithmic Learning Theory.
We study the average number of well-chosen labeled examples that are required for a helpful teacher to uniquely specify a target function within a concept class. This “average teaching dimension” has been studied in learning theory and combinatorics, and is an attractive alternative to the “worst-case” teaching dimension of Goldman and Kearns [7], which is exponential for many interesting concept classes. Recently, Balbach [3] showed that the classes of 1-decision lists and 2-term DNF each have linear average teaching dimension. As our main result, we extend Balbach’s teaching result for 2-term DNF by showing that for any 1 ≤ s ≤ 2^{Θ(n)}, the well-studied concept classes of at-most-s-term DNF and at-most-s-term monotone DNF each have average teaching dimension O(ns). The proofs use detailed analyses of the combinatorial structure of “most” DNF formulas and monotone DNF formulas. We also establish asymptotic separations between the worst-case and average teaching dimension for various other interesting Boolean concept classes, such as juntas and sparse GF(2) polynomials.