Context: Previous studies report that software developers are over-confident in the accuracy of their effort estimates. Aim: This study investigates the role of outcome feedback, that is, feedback about the discrepancy between estimated and actual effort, in improving uncertainty assessments. Method: We conducted two in-depth empirical studies of uncertainty assessment learning. Study 1 included five student developers; Study 2 included ten software professionals. In each study, the developers repeatedly assessed the uncertainty of their effort estimate for a programming task, solved the task, and received outcome feedback on their estimation accuracy. Results: We found that most, but not all, developers were initially over-confident in the accuracy of their effort estimates and remained over-confident despite repeated and timely outcome feedback. One important, but not sufficient, condition for improvement based on outcome feedback appears to be the use of explicitly formulated, rather than purely intuition-based, uncertainty assessment strategies.