This paper presents a novel Automatic Question Generation (AQG) approach that generates trigger questions to support students' learning through writing. The approach first automatically extracts citations, together with key content elements, from students' compositions. The citations are then classified using a rule-based approach, and questions are generated from a set of templates and the extracted content elements. A pilot study using the Bystander Turing Test investigated differences in writers' perception of questions generated by our AQG system and those produced by humans (Human Tutor, Lecturer, or Generic Question). The human evaluators had moderate difficulty distinguishing questions generated by the proposed system from those produced by humans (F-score = 0.43). Moreover, further results show that our system significantly outperforms the Generic Question condition on overall quality measures.
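The pipeline described above (extract citation, classify it with rules, instantiate a template) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the keyword rules, the "opinion"/"result" category names, and the question templates are all assumptions introduced for illustration.

```python
import re

# Hypothetical question templates keyed by citation category.
# The categories and wordings below are assumptions, not the paper's actual templates.
TEMPLATES = {
    "opinion": "Why do you agree or disagree with {author}'s claim that {content}?",
    "result": "How does {author}'s finding that {content} relate to your argument?",
}

def classify_citation(sentence: str) -> str:
    """Rule-based classification: simple keyword cues decide the citation type."""
    if re.search(r"\b(argues?|argued|claims?|claimed|believes?)\b", sentence, re.I):
        return "opinion"
    return "result"  # fallback category in this sketch

def generate_question(author: str, content: str, sentence: str) -> str:
    """Fill the template matching the citation's category with the content elements."""
    category = classify_citation(sentence)
    return TEMPLATES[category].format(author=author, content=content)

# Example: a citation sentence extracted from a student essay
sentence = "Smith (2010) argues that feedback improves writing quality."
print(generate_question("Smith", "feedback improves writing quality", sentence))
```

A real system would need a citation extractor and content-element parser in front of this step; here the author name and cited content are passed in directly for brevity.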