Content accuracy has not been fully exploited in previous studies on automated speaking assessment. Compared to responses in writing tests, responses in speaking tests are short, noisy (due to speech recognition errors), and full of incomplete sentences. To address these challenges in content scoring for speaking tests, we propose two new methods based on information extraction (IE) and machine learning. Compared to an ordinary content-scoring method based on vector analysis, which is widely used for scoring written essays, our proposed methods yield content features with higher correlations to human holistic scores.
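As a rough illustration of the vector-analysis baseline mentioned above (a minimal sketch, not the authors' exact system): a spoken response can be scored by the cosine similarity between its term-frequency vector and that of a high-scoring reference response. The example texts below are invented for illustration.

```python
from collections import Counter
import math

def tf_vector(text):
    """Lowercased term-frequency vector for a text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical reference answer and a noisy, ASR-style student response.
reference = "the library opens at nine and closes at five on weekdays"
response = "library opens nine closes five weekdays"

score = cosine(tf_vector(reference), tf_vector(response))
print(round(score, 3))
```

In practice such systems typically weight terms (e.g. tf-idf) and compare against many reference responses; the sketch keeps raw term frequencies only to show the core similarity computation that the IE- and ML-based methods are compared against.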