Learning Nonoverlapping Perceptron Networks from Examples and Membership Queries

  • Authors:
  • Thomas R. Hancock; Mostefa Golea; Mario Marchand

  • Affiliations:
  • Siemens Corporate Research, 755 College Road East, Princeton, NJ 08540. hancock@learning.siemens.com; Ottawa-Carleton Institute for Physics, University of Ottawa, Ottawa, Ont., Canada K1N 6N5. golea@physics.uottawa.ca; Ottawa-Carleton Institute for Physics, University of Ottawa, Ottawa, Ont., Canada K1N 6N5. mario@physics.uottawa.ca

  • Venue:
  • Machine Learning
  • Year:
  • 1994

Abstract

We investigate, within the PAC learning model, the problem of learning nonoverlapping perceptron networks (also known as read-once formulas over a weighted threshold basis). These are loop-free neural nets in which each node has only one outgoing weight. We give a polynomial-time algorithm that PAC learns any nonoverlapping perceptron network using examples and membership queries. The algorithm is able to identify both the architecture and the weight values necessary to represent the function to be learned. Our results shed some light on the effect of overlap on the complexity of learning in neural networks.
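To make the hypothesis class concrete, the following is a minimal sketch (not the authors' learning algorithm) of a nonoverlapping perceptron network: a tree of weighted threshold units in which each input variable feeds exactly one unit and each unit has a single outgoing connection. All class names, weights, and thresholds here are illustrative assumptions.

```python
class Var:
    """A leaf reading one input bit; 'read-once' means each index feeds one unit."""
    def __init__(self, i):
        self.i = i

    def eval(self, x):
        return x[self.i]


class Unit:
    """A threshold unit: outputs 1 iff the weighted sum of its children >= theta."""
    def __init__(self, children, weights, theta):
        self.children, self.weights, self.theta = children, weights, theta

    def eval(self, x):
        total = sum(w * c.eval(x) for w, c in zip(self.weights, self.children))
        return 1 if total >= self.theta else 0


# Illustrative network: u1 computes AND(x0, x1); the root fires when
# 2*u1 + x2 >= 2, i.e. whenever u1 fires, or when both x1-free terms do not suffice.
u1 = Unit([Var(0), Var(1)], [1, 1], 2)
root = Unit([u1, Var(2)], [2, 1], 2)

print(root.eval([1, 1, 0]))  # u1 fires: 2*1 + 0 = 2 >= 2, so output is 1
print(root.eval([0, 1, 1]))  # u1 off:   0 + 1 = 1 < 2,    so output is 0
```

A membership query in this setting simply asks an oracle for `root.eval(x)` at a learner-chosen input `x`; the paper's algorithm uses such queries together with random examples to recover both the tree structure and suitable weights.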