A technique for upper bounding the spectral norm with applications to learning

  • Authors:
  • Mihir Bellare

  • Affiliations:
  • IBM T.J. Watson Research Center, P.O. Box 704, Yorktown Heights, NY

  • Venue:
  • COLT '92: Proceedings of the Fifth Annual Workshop on Computational Learning Theory
  • Year:
  • 1992

Abstract

We present a general technique for upper bounding the spectral norm of an arbitrary function. At the heart of our technique is a theorem showing how to obtain an upper bound on the spectral norm of a decision tree from the spectral norms of the functions computed at the nodes of that tree. The theorem applies to trees whose nodes may compute arbitrary Boolean functions. Applications include the design of efficient learning algorithms and the construction of small-depth threshold circuits (or neural nets). In particular, we present polynomial-time algorithms for learning O(log n)-clause DNF formulas and various classes of decision trees, all under the uniform distribution with membership queries.
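As background (not taken from the paper itself), the spectral norm referred to here is usually the l1 norm of a function's Fourier coefficients over the parity basis, L1(f) = sum over S of |hat_f(S)|. The sketch below is a minimal brute-force illustration with hypothetical helper names: it computes the Fourier coefficients of a tiny Boolean function by enumeration and checks the classical fact that an ordinary decision tree with m leaves has spectral norm at most m. This is only meant to make the quantity concrete; the paper's theorem gives a more general bound for trees whose nodes compute arbitrary Boolean functions.

```python
# Minimal sketch (illustrative only, names are hypothetical): compute the
# Fourier coefficients and spectral (l1) norm of a Boolean function
# f : {0,1}^n -> {-1,+1} by brute-force enumeration. Feasible only for tiny n.

from itertools import product


def powerset(items):
    """Yield every subset of `items` as a list."""
    items = list(items)
    for mask in range(2 ** len(items)):
        yield [items[i] for i in range(len(items)) if (mask >> i) & 1]


def fourier_coefficients(f, n):
    """Return {S: hat_f(S)} where hat_f(S) = E_x[f(x) * chi_S(x)]."""
    inputs = list(product([0, 1], repeat=n))
    coeffs = {}
    for S in map(frozenset, powerset(range(n))):
        # chi_S(x) = (-1)^(sum of x_i for i in S)
        total = sum(f(x) * (-1) ** sum(x[i] for i in S) for x in inputs)
        coeffs[S] = total / 2 ** n
    return coeffs


def spectral_norm(f, n):
    """L1 spectral norm: sum of |hat_f(S)| over all subsets S."""
    return sum(abs(c) for c in fourier_coefficients(f, n).values())


# Example: a depth-2 decision tree on variables x0, x1, x2 with 4 leaves,
# outputs in {-1,+1}.
def tree(x):
    if x[0] == 0:
        return -1 if x[1] == 0 else +1
    else:
        return +1 if x[2] == 0 else -1


if __name__ == "__main__":
    # Classical bound: a decision tree with m leaves has spectral norm <= m.
    # Here m = 4, so the printed value should not exceed 4.
    print(spectral_norm(tree, 3))
```

The bound checked at the end follows because each leaf contributes the indicator of its root-to-leaf path times a sign, each such indicator has spectral norm 1, and the l1 norm is subadditive; the contribution of the paper is to extend this kind of reasoning to trees whose internal nodes compute richer functions.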