In this paper, we identify trends in, benefits of, and barriers to performing user evaluations in software engineering research. From a corpus of over 3,000 papers spanning ten years, we report on various subtypes of user evaluations (e.g., coding tasks vs. questionnaires) and relate user evaluations to paper topics (e.g., debugging vs. technology transfer). We identify external measures of impact, such as best paper awards and citation counts, that correlate with the presence of user evaluations. We complement this with a survey of over 100 researchers from over 40 universities and labs, in which we identify a set of perceived barriers to performing user evaluations.