Towards a unified theory of spoken language processing

  • Authors:
  • R. K. Moore

  • Affiliations:
  • Dept. Comput. Sci., Sheffield Univ., UK

  • Venue:
  • ICCI '05 Proceedings of the Fourth IEEE International Conference on Cognitive Informatics
  • Year:
  • 2005

Abstract

Spoken language processing is arguably the most sophisticated behavior of the most complex organism in the known universe and, unsurprisingly, scientists still have much to learn about how it works. Meanwhile, automated spoken language processing systems have begun to emerge in commercial applications, not as a result of any deep insights into the way in which humans process language, but largely as a consequence of the introduction of a 'data-driven' approach to building practical systems. At the same time, computational models of human spoken language processing have begun to emerge but, although this has given rise to a greater interest in the relationship between human and machine behavior, the performance of the best models appears to be asymptoting some way short of the capabilities of the human listener/speaker. This paper discusses these issues and presents an argument in favor of deriving a 'unifying theory' capable of explaining and predicting both human and machine spoken language processing behavior, which would thereby serve both communities as well as represent a long-term 'grand challenge' for the scientific community in the emerging field of 'cognitive informatics'.