Recommender systems are Integrated Development Environment (IDE) extensions that assist developers with coding tasks. However, because each one targets a specific aspect of the broader activity of programming, their impact is hard to assess. In previous work, we successfully applied an evaluation strategy based on automated benchmarks, which record developer interactions and replay them to evaluate several recommender systems automatically and precisely. In this paper, we highlight the challenges we expect to encounter when applying this approach to other recommender systems.
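The record-and-replay strategy described above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' actual benchmark: the toy frequency-based recommender and the artifact names are assumptions. The idea is to replay a recorded stream of developer interactions and, at each step, check whether the recommender's top-k suggestions contain the artifact the developer actually touched next.

```python
# Hedged sketch of benchmark-by-replay evaluation for an IDE recommender.
# The recommender and trace below are illustrative assumptions, not the
# authors' tooling.
from dataclasses import dataclass, field


@dataclass
class FrequencyRecommender:
    """Toy recommender: suggests the most frequently visited artifacts so far."""
    counts: dict = field(default_factory=dict)

    def recommend(self, k: int = 3) -> list:
        ranked = sorted(self.counts, key=self.counts.get, reverse=True)
        return ranked[:k]

    def observe(self, artifact: str) -> None:
        self.counts[artifact] = self.counts.get(artifact, 0) + 1


def replay_benchmark(interactions: list, recommender, k: int = 3) -> float:
    """Replay recorded interactions; return the fraction of steps where the
    next artifact appeared in the recommender's top-k suggestions."""
    hits, total = 0, 0
    for artifact in interactions:
        suggestions = recommender.recommend(k)
        if suggestions:  # only score once the recommender has some history
            total += 1
            hits += artifact in suggestions
        recommender.observe(artifact)  # update state with the real event
    return hits / total if total else 0.0


# Example: a recorded navigation trace (hypothetical file names)
trace = ["Parser.java", "Lexer.java", "Parser.java", "Ast.java", "Parser.java"]
accuracy = replay_benchmark(trace, FrequencyRecommender())
```

Because the ground truth is the recorded interaction itself, this setup needs no human subjects at evaluation time, which is what makes the benchmark fully automatic and repeatable.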