Peer and self-assessment offer an opportunity to scale both assessment and learning to global classrooms. This article reports our experiences with two iterations of the first large online class to use peer and self-assessment. In this class, peer grades correlated highly with staff-assigned grades. In the second iteration, 42.9% of students' grades fell within 5% of the staff grade, and 65.5% within 10%. On average, students assessed their own work 7% higher than staff did. Students also rated work by peers from their own country 3.6% higher than work by peers from elsewhere. We performed three experiments to improve grading accuracy. First, we found that giving students feedback about their grading bias increased subsequent accuracy. Second, we introduced short, customizable feedback snippets that cover common issues with assignments, providing students more qualitative peer feedback. Finally, we introduced a data-driven approach that highlights high-variance rubric items for improvement. We find that rubrics that use a parallel sentence structure, unambiguous wording, and well-specified dimensions have lower variance. After revising rubrics, median grading error decreased from 12.4% to 9.9%.
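The agreement statistics reported above (the fraction of peer grades within a tolerance of the staff grade, and the median grading error) can be computed directly from paired grade lists. The sketch below is illustrative only: the grade values and the helper name `agreement_stats` are made up for this example and are not from the study's data.

```python
# Hedged sketch: comparing peer-assigned grades against staff grades.
# All grade values here are hypothetical; errors are expressed as a
# percentage of the maximum score, matching the abstract's reporting style.

def agreement_stats(peer, staff, max_score=100.0):
    """Return (fraction within 5%, fraction within 10%, median absolute error %)."""
    errors = [abs(p - s) / max_score * 100.0 for p, s in zip(peer, staff)]
    within5 = sum(e <= 5.0 for e in errors) / len(errors)
    within10 = sum(e <= 10.0 for e in errors) / len(errors)
    srt = sorted(errors)
    n = len(srt)
    median = srt[n // 2] if n % 2 else (srt[n // 2 - 1] + srt[n // 2]) / 2.0
    return within5, within10, median

# Hypothetical peer vs. staff grades for five submissions:
peer = [88, 72, 95, 60, 81]
staff = [85, 70, 80, 62, 90]
w5, w10, med = agreement_stats(peer, staff)
```

A data-driven rubric revision like the one described would extend this by grouping errors per rubric item and flagging the items with the highest variance for rewording.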