Understanding mental states in natural language
IWCS-8 '09 Proceedings of the Eighth International Conference on Computational Semantics
Marker-passing inference in the Scone knowledge-base system
KSEM'06 Proceedings of the First International Conference on Knowledge Science, Engineering and Management
My science tutor: A conversational multimedia virtual tutor for elementary school science
ACM Transactions on Speech and Language Processing (TSLP)
ITS'10 Proceedings of the 10th International Conference on Intelligent Tutoring Systems - Volume Part I
Using information extraction to generate trigger questions for academic writing support
ITS'12 Proceedings of the 11th International Conference on Intelligent Tutoring Systems
Recognizing Young Readers' Spoken Questions
International Journal of Artificial Intelligence in Education
Self-questioning is an important reading comprehension strategy, so it would be useful for an intelligent tutor to help students apply it to any given text. Our goal is to help children generate questions that make them think about the text in ways that improve their comprehension and retention. However, teaching and scaffolding self-questioning involve analyzing both the text and the students' responses. This requirement makes it challenging to generate such instruction automatically, especially for children too young to respond by typing. This paper describes how to generate self-questioning instruction for an automated reading tutor. Following expert pedagogy, we decompose strategy instruction into four phases: describing, modeling, scaffolding, and prompting the strategy. We present a working example to illustrate how we generate each of these four phases of instruction for a given text. We identify relevant evaluation criteria and use them to evaluate the generated instruction on a corpus of 513 children's stories.
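The four-phase decomposition the abstract names (describe, model, scaffold, prompt) can be sketched in code. The following is a minimal hypothetical illustration, not the authors' implementation: the phase wordings, function names, and the idea of passing in a sentence, a worked question, and a question stem are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of four-phase strategy instruction
# (describe, model, scaffold, prompt); not the paper's actual system.

STRATEGY = "self-questioning"

def describe() -> str:
    """Phase 1: tell the student what the strategy is and why it helps."""
    return ("Good readers ask themselves questions while they read. "
            "Asking questions helps you understand and remember the story.")

def model(sentence: str, question: str) -> str:
    """Phase 2: demonstrate the strategy on a concrete sentence."""
    return (f'Listen to how I ask a question about this sentence: '
            f'"{sentence}" I wonder: {question}')

def scaffold(question_stem: str) -> str:
    """Phase 3: have the student complete a partially formed question."""
    return f"Now you try. Finish this question about the story: {question_stem} ..."

def prompt() -> str:
    """Phase 4: remind the student to apply the strategy independently."""
    return "As you read the next page, stop and ask yourself a question."

def generate_instruction(sentence: str, question: str, stem: str) -> list[str]:
    """Assemble the four phases in pedagogical order."""
    return [describe(), model(sentence, question), scaffold(stem), prompt()]
```

In a real tutor the model and scaffold phases would be driven by analysis of the given text (e.g., extracting a sentence and a question stem from it), which is the hard part the paper addresses; this sketch only fixes the pedagogical sequencing.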