Improving generalization of neural networks using multilayer perceptron discriminants

  • Authors:
  • Fadzilah Siraj; Derek Partridge

  • Affiliations:
  • School of Information Technology, University Utara Malaysia, 06010 Sintok, Kedah; Computer Science Department, University of Exeter, Exeter EX4 4PT, England

  • Venue:
  • Systems Analysis Modelling Simulation - Special issue: Advances in control and computer engineering
  • Year:
  • 2002

Abstract

This paper presents an empirical evaluation of improving the generalization performance of neural networks through systematic treatment of training and test failures. As a result of this treatment, multilayer perceptron (MLP) discriminants were developed as discrimination techniques. The experiments presented in this paper illustrate the application of these discriminants to neural networks trained to solve a supervised learning task, the Launch Interceptor Condition 1 problem. The MLP discriminants were constructed from the training and test patterns: the first discriminates hard-to-learn from easy-to-learn patterns, while the second discriminates hard-to-compute from easy-to-compute patterns. Further treatments were then applied to the hard-to-learn (or hard-to-compute) patterns prior to training (or testing). The experimental results reveal that directed splitting using an MLP discriminant is an effective strategy for improving the generalization of the networks.
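The core idea of an MLP discriminant can be sketched as a small one-hidden-layer network trained to label input patterns as hard-to-learn (1) or easy-to-learn (0). The architecture, training settings, and toy pattern data below are illustrative assumptions for this sketch, not the authors' exact setup:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class MLPDiscriminant:
    """One-hidden-layer perceptron labelling patterns as hard-to-learn (1)
    or easy-to-learn (0). Illustrative sketch, not the paper's exact model."""

    def __init__(self, n_in, n_hidden, seed=0):
        rnd = random.Random(seed)
        self.w1 = [[rnd.uniform(-0.5, 0.5) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.w2 = [rnd.uniform(-0.5, 0.5) for _ in range(n_hidden)]
        self.b2 = 0.0

    def forward(self, x):
        # Hidden activations, then a single sigmoid output in [0, 1]
        h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
             for ws, b in zip(self.w1, self.b1)]
        o = sigmoid(sum(w * hi for w, hi in zip(self.w2, h)) + self.b2)
        return h, o

    def train(self, patterns, labels, lr=0.5, epochs=2000):
        # Plain online backpropagation on squared error
        for _ in range(epochs):
            for x, t in zip(patterns, labels):
                h, o = self.forward(x)
                d_o = (o - t) * o * (1 - o)
                d_h = [d_o * self.w2[j] * h[j] * (1 - h[j])
                       for j in range(len(h))]
                for j in range(len(h)):
                    self.w2[j] -= lr * d_o * h[j]
                    for i in range(len(x)):
                        self.w1[j][i] -= lr * d_h[j] * x[i]
                    self.b1[j] -= lr * d_h[j]
                self.b2 -= lr * d_o

    def predict(self, x):
        return 1 if self.forward(x)[1] > 0.5 else 0

# Hypothetical pattern set: label 1 marks patterns a base network failed on
patterns = [[0.1, 0.2], [0.2, 0.1], [0.0, 0.3],
            [0.9, 0.8], [0.8, 0.9], [1.0, 0.7]]
labels = [0, 0, 0, 1, 1, 1]

disc = MLPDiscriminant(n_in=2, n_hidden=4)
disc.train(patterns, labels)
```

Once trained, `disc.predict(x)` routes a new pattern into the hard or easy group, so each group can receive its own treatment before training or testing, as the paper's directed-splitting strategy describes.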