Rules and generalization capacity extraction from ANN with GP

  • Authors:
  • Juan R. Rabuñal, Julián Dorado, Alejandro Pazos, Daniel Rivero

  • Affiliations:
  • Univ. da Coruña, Facultad Informática, Coruña, Spain (all authors)

  • Venue:
  • IWANN'03: Proceedings of the 7th International Conference on Artificial and Natural Neural Networks - Computational Methods in Neural Modeling, Volume 1
  • Year:
  • 2003


Abstract

In language engineering, language models are employed to improve system performance. These language models are usually N-gram models, estimated from large text databases using the occurrence frequencies of the N-grams. An alternative to conventional frequency-based estimation of N-gram probabilities is to use neural networks for this purpose. These "connectionist N-gram models", although very time-consuming to train, offer two interesting advantages over the conventional approach: the networks provide an implicit smoothing in their estimations, and the number of free parameters does not grow exponentially with N. Some experimental works provide empirical evidence of the capability of multilayer perceptrons and simple recurrent networks to emulate N-gram models, and propose new directions for extending neural-network-based language models.
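The conventional frequency-based estimation the abstract contrasts with can be illustrated by a minimal sketch (not from the paper): N-gram conditional probabilities are computed as the ratio of the count of an N-gram to the count of its (N-1)-word context.

```python
from collections import Counter

def ngram_probs(tokens, n=2):
    """Frequency-based N-gram estimation:
    P(w_n | w_1..w_{n-1}) = count(w_1..w_n) / count(w_1..w_{n-1})."""
    context_counts = Counter()  # counts of (N-1)-word contexts
    ngram_counts = Counter()    # counts of full N-grams
    for i in range(len(tokens) - n + 1):
        ngram = tuple(tokens[i:i + n])
        ngram_counts[ngram] += 1
        context_counts[ngram[:-1]] += 1
    # Relative frequency of each N-gram given its context
    return {g: c / context_counts[g[:-1]] for g, c in ngram_counts.items()}

# Toy corpus for illustration (hypothetical data, not from the paper)
corpus = "the cat sat on the mat the cat ran".split()
probs = ngram_probs(corpus, n=2)
# "the" occurs 3 times as a context, "the cat" twice -> P(cat | the) = 2/3
```

Note that the table of counts can hold up to V^N entries for a vocabulary of size V, which is the exponential growth in free parameters that the connectionist approach avoids; unseen N-grams also get zero probability here, which is why the implicit smoothing of the neural estimator matters.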