Connectionist architectural learning for high performance character and speech recognition

  • Authors:
  • Ulrich Bodenhausen; Stefan Manke

  • Affiliations:
  • Computer Science Department, University of Karlsruhe, Karlsruhe, FRG, and School of Computer Science, Carnegie Mellon University, Pittsburgh, PA (both authors)

  • Venue:
  • ICASSP '93: Proceedings of the 1993 IEEE International Conference on Acoustics, Speech, and Signal Processing: Plenary, Special, Audio, Underwater Acoustics, VLSI, Neural Networks - Volume I
  • Year:
  • 1993

Abstract

Highly structured neural networks like the Time-Delay Neural Network (TDNN) can achieve very high recognition accuracies in real-world applications such as handwritten character and speech recognition. Achieving the best possible performance, however, depends greatly on optimizing all structural parameters for the given task and the amount of training data. We propose an Automatic Structure Optimization (ASO) algorithm that avoids time-consuming manual optimization and apply it to Multi-State Time-Delay Neural Networks (MS-TDNN), a recent extension of the TDNN. We show that the ASO algorithm can construct efficient architectures in a single training run that achieve very high recognition accuracies on two handwritten character recognition tasks and one speech recognition task.
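
The abstract does not specify how ASO decides when and where to add structure in the MS-TDNN (e.g., time-delay widths or numbers of states). As a rough, hedged illustration of the general idea of adapting an architecture within a single training run, the sketch below grows the hidden layer of a plain feed-forward network whenever validation error plateaus. The plateau criterion, the thresholds, and the use of a small MLP instead of an MS-TDNN are all assumptions made for illustration, not the paper's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data (two Gaussian blobs), shuffled and split into train/validation.
X = np.vstack([rng.normal(-1.0, 1.0, (200, 2)), rng.normal(1.0, 1.0, (200, 2))])
y = np.hstack([np.zeros(200), np.ones(200)])
perm = rng.permutation(len(X))
X, y = X[perm], y[perm]
X_tr, y_tr, X_va, y_va = X[:300], y[:300], X[300:], y[300:]

def forward(X, W1, b1, W2, b2):
    """One hidden tanh layer, sigmoid output."""
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return h, p

def xent(p, y):
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# Start deliberately small: 2 hidden units.
W1 = rng.normal(0.0, 0.1, (2, 2)); b1 = np.zeros(2)
W2 = rng.normal(0.0, 0.1, 2);      b2 = 0.0

lr, patience_limit, max_hidden = 0.1, 10, 16
best_va, patience = np.inf, 0

for epoch in range(500):
    # Plain gradient descent on the training set.
    h, p = forward(X_tr, W1, b1, W2, b2)
    d_out = (p - y_tr) / len(y_tr)                # dL/d(logit) for cross-entropy
    dW2, db2 = h.T @ d_out, d_out.sum()
    d_h = np.outer(d_out, W2) * (1.0 - h ** 2)    # backprop through tanh
    dW1, db1 = X_tr.T @ d_h, d_h.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

    # Track validation loss to detect a plateau.
    _, p_va = forward(X_va, W1, b1, W2, b2)
    va = xent(p_va, y_va)
    if va < best_va - 1e-4:
        best_va, patience = va, 0
    else:
        patience += 1

    # Growth step (assumed criterion): if validation loss has stalled and the
    # budget allows, add one hidden unit with zero output weight so the
    # network's current function is unchanged at the moment of insertion.
    if patience >= patience_limit and W1.shape[1] < max_hidden:
        W1 = np.hstack([W1, rng.normal(0.0, 0.1, (2, 1))])
        b1 = np.append(b1, 0.0)
        W2 = np.append(W2, 0.0)
        best_va, patience = va, 0

print(f"hidden units after training: {W1.shape[1]}, best validation loss: {best_va:.3f}")
```

The key property shared with the approach described in the abstract is that structure is allocated during a single training run, driven by observed performance, rather than fixed by hand beforehand; the specific growth rules used by ASO for MS-TDNNs are given in the paper itself.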