Automatic assessment of oral language proficiency and listening comprehension

  • Authors:
  • F. de Wet; C. Van der Walt; T. R. Niesler

  • Affiliations:
  • Centre for Language and Speech Technology, Stellenbosch University, South Africa; Department of Curriculum Studies, Stellenbosch University, South Africa; Department of Electrical and Electronic Engineering, Stellenbosch University, South Africa

  • Venue:
  • Speech Communication
  • Year:
  • 2009


Abstract

This paper describes an attempt to automate the large-scale assessment of oral language proficiency and listening comprehension for fairly advanced students of English as a second language. The automatic test is implemented as a spoken dialogue system and consists of a reading task as well as a repeating task. Two experiments are described in which different rating criteria were used by human judges. In the first experiment, proficiency was scored globally for each of the two test components. In the second experiment, various aspects of proficiency were evaluated for each section of the test. In both experiments, rate of speech (ROS), goodness of pronunciation (GOP) and repeat accuracy were calculated for the spoken utterances. The correlation between scores assigned by human raters and these three automatically derived measures was determined to assess their suitability as proficiency indicators. Results show that the more specific rating instructions used in the second experiment improved intra-rater agreement, but made little difference to inter-rater agreement. In addition, the more specific rating criteria resulted in a better correlation between the human and the automatic scores for the repeating task, but had almost no impact on the reading task. Overall, the results indicate that, even for the narrow range of proficiency levels observed in the test population, the automatically derived ROS and accuracy scores give a fair indication of oral proficiency.
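
The abstract's central validation step is the correlation between human proficiency ratings and the automatic measures (ROS, GOP, repeat accuracy). The sketch below illustrates that step with Pearson's r; the numeric values, the 1-5 rating scale, and the units shown are placeholder assumptions for illustration, not data or code from the paper.

```python
# Illustrative sketch (assumptions, not the authors' implementation):
# correlate automatic measures with human proficiency ratings per speaker.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-speaker values: human rating (assumed 1-5 scale),
# rate of speech (assumed phones/second), and a GOP-style score
# (log-likelihood-ratio based, higher = closer to native pronunciation).
human = np.array([3.5, 4.0, 2.5, 4.5, 3.0, 3.5])
ros   = np.array([9.8, 11.2, 8.1, 12.0, 9.0, 10.1])
gop   = np.array([-1.2, -0.8, -2.0, -0.6, -1.5, -1.1])

# Report Pearson correlation of each automatic measure with the human scores.
for name, auto in [("ROS", ros), ("GOP", gop)]:
    r, p = pearsonr(human, auto)
    print(f"{name}: r = {r:.2f} (p = {p:.3f})")
```

A measure whose correlation with the human ratings is high and stable across raters would, on this logic, be a usable automatic proficiency indicator; the paper reports that ROS and repeat accuracy come closest to that standard for its test population.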