Large online courses often assign problems that are easy to grade because they have a fixed set of solutions (such as multiple choice), but grading and guiding students is more difficult in problem domains with an unbounded number of correct answers. One such domain is derivations: sequences of logical steps commonly used in assignments for technical, mathematical, and scientific subjects. We present DeduceIt, a system for creating, grading, and analyzing derivation assignments in any formal domain. DeduceIt supports assignments in any logical formalism, provides students with incremental feedback, and aggregates student paths through each proof to produce instructor analytics. DeduceIt benefits from checking thousands of derivations on the web: it introduces a proof cache, a novel data structure that leverages a crowd of students to decrease the cost of checking derivations and providing real-time, constructive feedback. We evaluate DeduceIt with 990 students in an online compilers course, finding that students take advantage of its incremental feedback and that instructors benefit from its structured insights into course topics. Our work suggests that automated reasoning can extend online assignments and large-scale education to many new domains.
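The proof-cache idea described above can be sketched as a memoization layer over an expensive derivation checker: once any student's step has been verified, identical steps submitted by other students are answered from the cache instead of being re-checked. The class and verifier below are illustrative assumptions, not DeduceIt's actual API.

```python
# Hypothetical sketch of a proof cache: verified derivation steps are
# memoized by (premise, rule, conclusion) so the crowd of students
# amortizes the cost of formal checking. All names here are assumed.
from typing import Callable, Dict, Tuple

StepKey = Tuple[str, str, str]  # canonicalized (premise, rule, conclusion)

class ProofCache:
    def __init__(self, verifier: Callable[[str, str, str], bool]):
        self.verifier = verifier            # the expensive formal checker
        self.cache: Dict[StepKey, bool] = {}
        self.hits = 0
        self.misses = 0

    def check_step(self, premise: str, rule: str, conclusion: str) -> bool:
        key = (premise.strip(), rule, conclusion.strip())
        if key in self.cache:
            self.hits += 1                  # another student reused this step
        else:
            self.misses += 1
            self.cache[key] = self.verifier(*key)
        return self.cache[key]

# Toy verifier for one rule: modus ponens over "A->B"-style formulas.
def toy_verifier(premise: str, rule: str, conclusion: str) -> bool:
    if rule == "modus_ponens":
        _, _, right = premise.partition("->")
        return bool(right) and right == conclusion
    return False

cache = ProofCache(toy_verifier)
print(cache.check_step("A->B", "modus_ponens", "B"))  # miss: runs the verifier
print(cache.check_step("A->B", "modus_ponens", "B"))  # hit: answered from cache
```

In a real deployment the cache key would be a canonical form of the step (so syntactically different but equivalent submissions collide), and the cached value would carry the constructive feedback to replay, not just a boolean.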