Learner answer assessment in intelligent tutoring systems
This paper presents a process for automatically extracting a fine-grained semantic representation of a learner's response to a tutor's question. The representation can be extracted with available natural language processing technologies, and it enables a detailed assessment of the learner's understanding, which in turn supports the evaluation of tutoring pedagogy that depends on such fine-grained assessment. We describe a system that assesses student answers at this fine-grained level using features extracted from the automatically generated representations. The system classifies answers to indicate the student's apparent understanding of each low-level facet of a known reference answer, achieving 76% accuracy on these fine-grained decisions for within-domain assessment and 69% out of domain.
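The facet-by-facet assessment described above can be illustrated with a minimal sketch. This is not the authors' actual system: the facet format (relation tuples from a reference answer), the lexical-overlap feature, and the threshold are all illustrative assumptions standing in for the paper's learned classifier over richer representation-derived features.

```python
# Hypothetical sketch of facet-level answer assessment. Each reference-answer
# facet is modeled as a (governor, relation, dependent) tuple; a student
# answer is labeled per facet by simple word overlap. All names, labels, and
# the 0.5 threshold are illustrative assumptions, not the paper's method.

def facet_label(facet, answer_tokens, threshold=0.5):
    """Label one facet 'expressed' or 'unaddressed' by the fraction of the
    facet's words that appear in the student answer."""
    words = [w.lower() for part in facet for w in part.split()]
    overlap = sum(1 for w in words if w in answer_tokens) / len(words)
    return "expressed" if overlap >= threshold else "unaddressed"

def assess(reference_facets, student_answer):
    """Return a per-facet assessment of the student answer."""
    tokens = {w.lower() for w in student_answer.split()}
    return {facet: facet_label(facet, tokens) for facet in reference_facets}

# Toy physics-tutoring example: two facets of a known reference answer.
facets = [("string", "tension", "pulls"), ("pulley", "redirects", "force")]
result = assess(facets, "The tension in the string pulls the block")
```

A real system would replace the overlap heuristic with a trained classifier and would draw facets from an automatically parsed semantic representation rather than hand-written tuples, but the per-facet output structure is the point: each low-level facet of the reference answer receives its own understanding judgment.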