Symmetries and discriminability in feedforward network architectures

  • Authors:
  • J. Shawe-Taylor

  • Affiliations:
  • Department of Computer Science, Royal Holloway and Bedford New College, Egham

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1993

Abstract

This paper investigates the effects of introducing symmetries into feedforward neural networks in what are termed symmetry networks. This technique allows more efficient training for problems in which the output of a network is required to be invariant under a set of transformations of the input. The particular problem of graph recognition is considered. In this case the network is designed to deliver the same output for isomorphic graphs. This leads to the question of which inputs can be distinguished by such architectures. A theorem characterizing when two inputs can be distinguished by a symmetry network is given. As a consequence, a particular network design is shown to be able to distinguish nonisomorphic graphs if and only if the graph reconstruction conjecture holds.
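
The following is a minimal sketch of the weight-sharing idea behind a symmetry network for graph inputs, not the paper's actual construction: one hidden unit per vertex, with a shared weight for edges incident to that vertex (`w_incident`) and another shared weight for the remaining edges (`w_other`), followed by a symmetric readout. The particular tying scheme and parameter names are illustrative assumptions; the point is only that permuting the vertex labels permutes the hidden units and therefore leaves the output unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared parameters for a toy symmetry network on n-vertex graphs.
n = 6
w_incident = rng.normal()   # shared weight for edges incident to the unit's vertex
w_other = rng.normal()      # shared weight for all remaining edges
bias = rng.normal()
w_out = rng.normal()        # shared readout weight over the n hidden units

def symmetry_net(adj):
    """One hidden unit per vertex; the weight sharing above makes the
    scalar output invariant under any relabelling of the vertices."""
    total_edges = adj.sum() / 2.0          # number of edges in the graph
    degree = adj.sum(axis=1)               # edges incident to each vertex
    hidden = np.tanh(w_incident * degree
                     + w_other * (total_edges - degree)
                     + bias)               # permutation-equivariant hidden layer
    return w_out * hidden.sum()            # symmetric readout => invariant output

# A random undirected graph and a random relabelling of its vertices.
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1)
A = A + A.T
perm = rng.permutation(n)
A_perm = A[np.ix_(perm, perm)]

print(symmetry_net(A), symmetry_net(A_perm))   # equal up to floating-point rounding
```

Running the script prints the same value for the graph and its relabelled copy, illustrating the invariance property; it says nothing about which nonisomorphic graphs such a network can distinguish, which is the question the paper's theorem addresses.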