Representing Probabilistic Rules with Networks of Gaussian Basis Functions

  • Authors:
  • Volker Tresp; Jürgen Hollatz; Subutai Ahmad

  • Affiliations:
  • Siemens AG, Central Research, 81730 München, Germany, E-mail: Volker.Tresp@mchp.siemens.de; Siemens AG, Central Research, 81730 München, Germany, E-mail: Juergen.Hollatz@mchp.siemens.de; Interval Research Corporation, 1801-C Page Mill Rd., Palo Alto, CA 94304, E-mail: ahmad@interval.com

  • Venue:
  • Machine Learning
  • Year:
  • 1997


Abstract

There is great interest in understanding the intrinsic knowledge neural networks have acquired during training. Most work in this direction has focused on the multi-layer perceptron architecture. The topic of this paper is networks of Gaussian basis functions, which are used extensively as learning systems in neural computation. We show that networks of Gaussian basis functions can be generated from simple probabilistic rules. Also, if appropriate learning rules are used, probabilistic rules can be extracted from trained networks. We present methods for the reduction of network complexity with the goal of obtaining concise and meaningful rules. We show how prior knowledge can be refined or supplemented using data by employing either a Bayesian approach, a weighted combination of knowledge bases, or artificial training data that represents the prior knowledge. We validate our approach on a standard statistical data set.
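
The correspondence the abstract describes between rules and network units can be made concrete with a short sketch. In a normalized Gaussian basis function network, each unit can be read as a probabilistic rule (roughly, "IF x is near mu_i THEN y is about w_i"), and the network output equals the conditional mean of the associated Gaussian mixture. The following minimal Python sketch illustrates this form; the function name gbf_network and the toy parameters are illustrative assumptions, not code from the paper.

    import numpy as np

    def gbf_network(x, mu, sigma, w):
        """Normalized Gaussian basis function network (illustrative sketch).

        x     : (d,) input vector
        mu    : (k, d) Gaussian centers, one per rule
        sigma : (k,) Gaussian widths
        w     : (k,) rule conclusions (output weights)
        """
        # Unnormalized unit activations: a Gaussian kernel per rule.
        act = np.exp(-np.sum((x - mu) ** 2, axis=1) / (2.0 * sigma ** 2))
        # Normalized activations behave like posterior rule probabilities
        # P(rule i | x); the output is the corresponding weighted mean,
        # i.e. the conditional mean E[y | x] of a Gaussian mixture.
        p = act / np.sum(act)
        return np.dot(p, w)

    # Two toy rules: near x = 0 predict +1, near x = 5 predict -1.
    mu = np.array([[0.0], [5.0]])
    sigma = np.array([1.0, 1.0])
    w = np.array([1.0, -1.0])

    for xv in (0.0, 2.5, 5.0):
        print(xv, gbf_network(np.array([xv]), mu, sigma, w))

The normalization step is what gives the activations their probabilistic reading: halfway between the two centers both rules are equally active and the prediction is their weighted average (here, 0).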