Multiple source phoneme recognition aided by articulatory features

  • Authors:
  • Mark Kane; Julie Carson-Berndsen

  • Affiliations:
  • CNGL, School of Computer Science and Informatics, University College Dublin, Ireland (both authors)

  • Venue:
  • IEA/AIE'11 Proceedings of the 24th international conference on Industrial engineering and other applications of applied intelligent systems conference on Modern approaches in applied intelligence - Volume Part II
  • Year:
  • 2011

Abstract

This paper presents a speech recognition experiment in which multiple phoneme recognisers are applied to the same utterance. When the recognisers agree on a hypothesis for a given time interval, that hypothesis is assumed to be correct. When they disagree, fine-grained phonetic features, called articulatory features, recognised from the same utterance are used to construct an articulatory feature-based phoneme. If the output of either phoneme recogniser for that interval matches the articulatory feature-based phoneme, that phoneme is selected as the hypothesis for the interval. If no hypothesis is found, the articulatory feature-based phoneme is underspecified and the matching process is repeated. The experimental results show that the accuracy of the final output exceeds that of either of the two initial phoneme recognisers.
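The agreement-then-fallback scheme described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the feature inventory, the `matches` predicate, and the underspecification rule (dropping one feature per retry) are all assumptions made for the sake of a runnable example.

```python
# Illustrative feature table for a few phonemes (values are for the sketch
# only; the paper's actual articulatory feature set is not reproduced here).
PHONEME_FEATURES = {
    "p": {"voicing": "voiceless", "place": "bilabial", "manner": "plosive"},
    "b": {"voicing": "voiced",    "place": "bilabial", "manner": "plosive"},
    "m": {"voicing": "voiced",    "place": "bilabial", "manner": "nasal"},
}

def matches(phoneme, features):
    """A phoneme matches if it agrees on every still-specified feature."""
    spec = PHONEME_FEATURES.get(phoneme, {})
    return all(spec.get(k) == v for k, v in features.items())

def underspecify(features):
    """Relax the AF-based phoneme by dropping one feature (arbitrary here;
    a real system would drop the least reliable feature first)."""
    relaxed = dict(features)
    if relaxed:
        relaxed.pop(next(iter(relaxed)))
    return relaxed

def combine(p1, p2, af_features):
    """Select a phoneme hypothesis for one time interval.

    p1, p2      -- hypotheses from the two phoneme recognisers
    af_features -- articulatory features recognised for the same interval
    """
    if p1 == p2:                      # recognisers agree: accept outright
        return p1
    feats = dict(af_features)
    while feats:                      # disagreement: consult the AF-based phoneme
        for p in (p1, p2):
            if matches(p, feats):
                return p
        feats = underspecify(feats)   # no match: underspecify and retry
    return None                       # no hypothesis found for this interval
```

For example, if one recogniser outputs "p" and the other "b" while the recognised articulatory features indicate a voiced bilabial plosive, `combine("p", "b", {...})` selects "b"; when even this fails, successive underspecification widens the match until a hypothesis is found or the features are exhausted.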