Non-scorable response detection for automated speaking proficiency assessment

  • Authors:
  • Su-Youn Yoon, Keelan Evanini, Klaus Zechner

  • Affiliations:
  • Educational Testing Service, Princeton, NJ

  • Venue:
  • IUNLPBEA '11: Proceedings of the 6th Workshop on Innovative Use of NLP for Building Educational Applications
  • Year:
  • 2011


Abstract

We present a method that filters out non-scorable (NS) responses, such as responses affected by technical difficulties, in an automated speaking proficiency assessment system. The assessment system described in this study first filters out the non-scorable responses and then predicts a proficiency score for the remaining responses using a scoring model. The data were collected from non-native speakers in two different countries, using two different item types in the proficiency assessment: items that elicit spontaneous speech and items that elicit recited speech. Since the proportion of NS responses and the features available to the model differ by item type, a separate model was trained for each item type. The accuracy of the models ranged between 75% and 79% for spontaneous speech items and between 95% and 97% for recited speech items. Two different groups of features were implemented: signal-processing-based features and automatic speech recognition (ASR)-based features. The ASR-based models achieved higher accuracy than the non-ASR-based models.
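
The paper itself does not publish an implementation; the following minimal Python sketch only illustrates the two-stage architecture the abstract describes (item-type-specific NS filter first, scoring model second). All names here (Response, NSFilter, ScoringModel, extract_features) and the toy feature values are hypothetical, not from the paper.

```python
from dataclasses import dataclass


@dataclass
class Response:
    audio_path: str
    item_type: str  # "spontaneous" or "recited"


def extract_features(response, use_asr=True):
    """Placeholder feature extraction: signal-processing features plus,
    optionally, ASR-based features (the paper reports ASR-based models
    were more accurate than non-ASR-based ones)."""
    features = {"snr": 0.0, "duration": 0.0}   # signal-processing features
    if use_asr:
        features["asr_confidence"] = 0.0       # ASR-based feature
    return features


class NSFilter:
    """Binary classifier flagging non-scorable responses (toy rule)."""
    def is_scorable(self, features):
        return features.get("asr_confidence", 0.0) > 0.1


class ScoringModel:
    """Proficiency scoring model, applied to scorable responses only."""
    def score(self, features):
        return 3.0  # placeholder proficiency score


def assess(responses, filters, scorers):
    """Run the item-type-specific NS filter first; score only survivors."""
    results = []
    for r in responses:
        feats = extract_features(r)
        if not filters[r.item_type].is_scorable(feats):
            results.append((r, None))  # flagged as non-scorable
        else:
            results.append((r, scorers[r.item_type].score(feats)))
    return results
```

Training one filter per item type, as in the paper, reflects that both the NS rate and the usable features differ between spontaneous and recited speech items.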