Analysis of the internal representations in neural networks for machine intelligence

  • Authors:
  • Lai-Wan Chan

  • Affiliations:
  • Computer Science Department, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong

  • Venue:
  • AAAI'91 Proceedings of the ninth National conference on Artificial intelligence - Volume 2
  • Year:
  • 1991

Abstract

We examined the internal representations of the training patterns in multi-layer perceptrons and demonstrated that the connection weights between layers effectively transform the representation of the information from one layer to the next in a meaningful way. The internal code, which can be analog or binary, is found to depend on a number of factors, including the choice of an appropriate representation of the training patterns, the similarities between the patterns, and the network structure, i.e. the number of hidden layers and the number of hidden units in each layer.
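The kind of analysis the abstract describes can be illustrated with a minimal sketch: pass a set of training patterns through a small multi-layer perceptron and inspect the hidden-layer activations, which form the internal (analog) code for each pattern. The network, weights, and pattern set below are hypothetical examples, not the ones studied in the paper.

```python
import numpy as np

# Sigmoid activation, as commonly used in multi-layer perceptrons of this era.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Four one-hot input patterns; a 2-unit hidden layer forces the network to
# re-encode them in a compressed internal representation (encoder-style setup;
# weights here are random placeholders, not trained values from the paper).
X = np.eye(4)
W1 = rng.normal(size=(4, 2))   # input -> hidden weights
W2 = rng.normal(size=(2, 4))   # hidden -> output weights

H = sigmoid(X @ W1)            # hidden activations: the internal code per pattern
Y = sigmoid(H @ W2)            # output-layer re-encoding of that code

print("internal (hidden-layer) codes, one row per pattern:")
print(np.round(H, 3))
```

Each row of `H` is the hidden-layer code for one input pattern; comparing these rows across similar and dissimilar patterns, and across different numbers of hidden units, is the type of examination the abstract refers to.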