Effect of answer format and review method on college students' learning
Computers & Education
The aim of the present research was to compare examination based on multiple-choice questions (MCQs) with examination based on constructed-response questions (CRQs). Although MCQs offer objectivity in grading and speed in producing results, they also introduce error into the final score. This error stems from the possibility of answering a question correctly by chance or on instinct, which does not establish that the examinee commands the whole body of knowledge the question addresses. In the present study, both MCQ and CRQ tests were given to examinees within a computer-based learning system. To avoid mixed scoring (i.e. both positive and negative marking), a set of MCQ pairs was composed. The two MCQs in each pair addressed the same topic, but this similarity was not evident to an examinee who lacked adequate knowledge of that topic. When administered to the same sample of students, on the same topics and at the same levels of difficulty, the examination based on these "paired" MCQs, scored with a suitable rule, produced grades statistically indistinguishable from those of an examination based on CRQs, while both the "paired" MCQ results and the CRQ results differed significantly from those of an MCQ test scored with a positive-only rule.
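The abstract does not spell out the "suitable scoring rule" for the paired MCQs. A minimal sketch of one natural choice, assuming a pair earns credit only when both of its questions are answered correctly (so a lucky guess on a single question yields nothing), contrasted with conventional positive-only scoring:

```python
def paired_mcq_score(pair_results):
    """Hypothetical paired-MCQ scoring: a pair counts as correct only
    when BOTH of its questions are answered correctly, suppressing
    credit gained by guessing one question by chance.

    pair_results: list of (bool, bool) tuples, one per question pair.
    Returns the score as a fraction in [0, 1].
    """
    if not pair_results:
        return 0.0
    correct_pairs = sum(1 for a, b in pair_results if a and b)
    return correct_pairs / len(pair_results)


def positive_only_score(answers):
    """Conventional positive-only MCQ scoring: one point per correct
    answer, no penalty for wrong answers.

    answers: list of bool, one per question.
    Returns the score as a fraction in [0, 1].
    """
    if not answers:
        return 0.0
    return sum(answers) / len(answers)
```

Under this assumed rule, a pure guesser with per-question success probability 1/k scores about 1/k per question on a positive-only test but only about 1/k² per pair, which is consistent with the paired test tracking CRQ grades more closely than the positive-only test does.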