- On subspaces spanned by random selections of ±1 vectors. Journal of Combinatorial Theory, Series A (The Herbert J. Ryser Memorial Issue, Part 1); International Journal of Game Theory.
- On the Size of Weights for Threshold Gates. SIAM Journal on Discrete Mathematics.
- Toward Efficient Agnostic Learning. Machine Learning (special issue on computational learning theory, COLT '92).
- Vector analysis of threshold functions. Information and Computation.
- On Restricted-Focus-of-Attention Learnability of Boolean Functions. Machine Learning (special issue on the ninth annual conference on computational learning theory, COLT '96).
- Learning with restricted focus of attention. Journal of Computer and System Sciences.
- Chow Parameters in Threshold Logic. Journal of the ACM (JACM).
- Hard-core distributions for somewhat hard problems. FOCS '95, Proceedings of the 36th Annual Symposium on Foundations of Computer Science.
- Agnostically Learning Halfspaces. FOCS '05, Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science; SIAM Journal on Discrete Mathematics.
- Every Linear Threshold Function has a Low-Weight Approximator. Computational Complexity.
- Threshold Gate Approximations Based on Chow Parameters. IEEE Transactions on Computers.
- Chow parameters in pseudothreshold logic. SWAT '73, Proceedings of the 14th Annual Symposium on Switching and Automata Theory (SWAT 1973).
- Regularity, Boosting, and Efficiently Simulating Every High-Entropy Distribution. CCC '09, Proceedings of the 24th Annual IEEE Conference on Computational Complexity.
- Improved Approximation of Linear Threshold Functions. CCC '09, Proceedings of the 24th Annual IEEE Conference on Computational Complexity.
- On the characterization of threshold functions. FOCS '61, Proceedings of the 2nd Annual Symposium on Switching Circuit Theory and Logical Design (SWCT 1961).
- Agnostic Learning of Monomials by Halfspaces Is Hard. FOCS '09, Proceedings of the 50th Annual IEEE Symposium on Foundations of Computer Science.
- Approximating linear threshold predicates. APPROX/RANDOM '10, Proceedings of the 13th International Conference on Approximation and the 14th International Conference on Randomization and Combinatorial Optimization: Algorithms and Techniques; SIAM Journal on Computing.
- Bounded Independence Fools Halfspaces. SIAM Journal on Computing.
- The inverse Shapley value problem. ICALP '12, Proceedings of the 39th International Colloquium on Automata, Languages, and Programming, Part I.
- Low-weight halfspaces for sparse boolean vectors. Proceedings of the 4th Conference on Innovations in Theoretical Computer Science; ICALP '13, Proceedings of the 40th International Colloquium on Automata, Languages, and Programming, Part I.
The Chow parameters of a Boolean function f: {-1,1}^n → {-1,1} are its n+1 degree-0 and degree-1 Fourier coefficients. It has been known since 1961 [Cho61, Tan61] that the (exact values of the) Chow parameters of any linear threshold function f uniquely specify f within the space of all Boolean functions, but until recently [OS11] nothing was known about efficient algorithms for reconstructing f (exactly or approximately) from exact or approximate values of its Chow parameters. We refer to this reconstruction problem as the Chow Parameters Problem. Our main result is a new algorithm for the Chow Parameters Problem which, given (sufficiently accurate approximations to) the Chow parameters of any linear threshold function f, runs in time Õ(n²) · (1/ε)^{O(log²(1/ε))} and with high probability outputs a representation of an LTF f′ that is ε-close to f. The only previous algorithm [OS11] had running time poly(n) · 2^{2^{Õ(1/ε²)}}. As a byproduct of our approach, we show that for any linear threshold function f over {-1,1}^n, there is a linear threshold function f′ which is ε-close to f and has all weights that are integers of magnitude at most √n · (1/ε)^{O(log²(1/ε))}. This significantly improves the best previous result of [Serv09], which gave a poly(n) · 2^{O(1/ε^{2/3})} weight bound, and is close to the known lower bound of max{√n, (1/ε)^{Ω(log log(1/ε))}} [Gol06, Serv07]. Our techniques also yield improved algorithms for related problems in learning theory. In addition to being significantly stronger than previous work, our results are obtained using conceptually simpler proofs.
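To make the definition concrete, here is a minimal Python sketch (not from the paper; the function names `chow_parameters` and `ltf` are mine) that computes the n+1 Chow parameters of a small linear threshold function by direct enumeration over {-1,1}^n:

```python
from itertools import product

def chow_parameters(f, n):
    """The n+1 Chow parameters of f: {-1,1}^n -> {-1,1}: the degree-0
    Fourier coefficient E[f(x)] followed by the n degree-1 coefficients
    E[f(x)*x_i], over a uniformly random x in {-1,1}^n."""
    pts = list(product([-1, 1], repeat=n))
    f0 = sum(f(x) for x in pts) / len(pts)
    return [f0] + [sum(f(x) * x[i] for x in pts) / len(pts) for i in range(n)]

def ltf(w, theta):
    """The linear threshold function x -> sign(w . x - theta), with sign(0) = +1."""
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else -1

maj3 = ltf([1, 1, 1], 0)           # majority of 3 bits
print(chow_parameters(maj3, 3))    # -> [0.0, 0.5, 0.5, 0.5]
```

The brute-force enumeration takes 2^n time, so this is only an illustration of the quantities involved, not of the paper's algorithm.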
The two main ingredients underlying our results are: (1) a new structural result showing that, for any linear threshold function f and any bounded function g, if the Chow parameters of f are close to the Chow parameters of g then f is close to g; and (2) a new boosting-like algorithm that, given approximations to the Chow parameters of a linear threshold function f, outputs a bounded function whose Chow parameters are close to those of f.
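The 1961 uniqueness theorem behind the problem can be seen concretely in a toy brute-force search (an illustrative Python sketch with my own naming, emphatically not the paper's efficient algorithm): given the exact Chow parameters of a hidden LTF, scanning all small-integer-weight LTFs for one with matching parameters necessarily recovers the same Boolean function.

```python
from itertools import product

def chow(f, n):
    """Chow parameters (E[f], E[f*x_1], ..., E[f*x_n]) of f on {-1,1}^n."""
    pts = list(product([-1, 1], repeat=n))
    params = [sum(f(x) for x in pts)]
    params += [sum(f(x) * x[i] for x in pts) for i in range(n)]
    return tuple(p / len(pts) for p in params)

def ltf(w, theta):
    """The linear threshold function x -> sign(w . x - theta), with sign(0) = +1."""
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else -1

def recover_ltf(target_chow, n, max_weight=2):
    """Toy exponential-time search: find integer weights and a threshold
    whose LTF has exactly the target Chow parameters. By the Chow/Tannenbaum
    uniqueness theorem, any match equals the hidden LTF as a Boolean function."""
    for w in product(range(-max_weight, max_weight + 1), repeat=n):
        for theta in range(-max_weight * n, max_weight * n + 1):
            if chow(ltf(w, theta), n) == target_chow:
                return w, theta
    return None

n = 3
secret = ltf((2, 1, 1), 1)                  # a hidden LTF
w, theta = recover_ltf(chow(secret, n), n)  # reconstruct from Chow parameters only
pts = list(product([-1, 1], repeat=n))
assert all(ltf(w, theta)(x) == secret(x) for x in pts)
```

Note that the search may return a different weight representation (e.g. a different threshold) than the hidden one; uniqueness holds at the level of the Boolean function, not of its weight vector. The paper's contribution is doing this reconstruction from approximate parameters in time Õ(n²) · (1/ε)^{O(log²(1/ε))} rather than by exhaustive search.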