Background: There are many data mining methods but few comparisons between them. For example, there are at least two ways to build quality optimizers: programs that find project options that improve quality measures such as defects, development effort (total staff hours), and development time (elapsed calendar months). The first way constructs a parametric model that represents prior software projects. The second applies case-based reasoning (CBR) to reason directly from historical cases.

Aim: To assess case-based reasoning against parametric modeling for quality optimization.

Method: We compared the W case-based reasoner against the SEESAW parametric modeling tool.

Results: W is easy to explain and fast to build. It makes no parametric assumptions and hence can be rapidly applied to project data in many formats. SEESAW is an elaborate tool that can only process project data expressed in a particular ontology (i.e., the COCOMO attributes), and it is slower to execute than W. In 24 tests comparing W and SEESAW, W always performed at least as well as SEESAW; in 6 of those tests W performed statistically better (all tests used Mann-Whitney at 95% confidence). Lastly, like any CBR method, W comes with a built-in maintenance strategy: just add more cases.

Conclusion: The W case-based reasoning tool is recommended over the SEESAW parametric modeling tool for quality optimization, except when there is no local data.
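To make the contrast concrete, the CBR side of the comparison can be sketched as a simple analogy-based estimator: normalize the historical project attributes, find the k nearest past cases, and predict from their known efforts. This is a minimal illustrative sketch, not the actual W tool from the abstract; the attribute names and data are hypothetical.

```python
# Minimal sketch of case-based reasoning (analogy) for effort estimation.
# NOT the W tool itself: a generic k-nearest-neighbor estimator over
# historical project cases. All attribute names and data are hypothetical.

def make_normalizer(cases):
    """Scale each numeric attribute to [0, 1] so that no single
    attribute dominates the distance measure."""
    keys = [k for k in cases[0] if k != "effort"]
    lo = {k: min(c[k] for c in cases) for k in keys}
    hi = {k: max(c[k] for c in cases) for k in keys}
    def norm(case):
        return {k: (case[k] - lo[k]) / ((hi[k] - lo[k]) or 1) for k in keys}
    return norm

def estimate(cases, new_project, k=3):
    """Predict effort for new_project as the median effort of its
    k nearest historical cases (Euclidean distance on normalized attributes)."""
    norm = make_normalizer(cases)
    target = norm(new_project)
    def dist(case):
        nc = norm(case)
        return sum((nc[a] - target[a]) ** 2 for a in target) ** 0.5
    nearest = sorted(cases, key=dist)[:k]
    efforts = sorted(c["effort"] for c in nearest)
    return efforts[len(efforts) // 2]  # median of the k analogies

# Usage with a tiny hypothetical case base (effort in staff-months):
history = [
    {"kloc": 10, "team": 4, "effort": 24},
    {"kloc": 50, "team": 9, "effort": 120},
    {"kloc": 12, "team": 5, "effort": 30},
    {"kloc": 45, "team": 8, "effort": 110},
    {"kloc": 30, "team": 6, "effort": 70},
]
print(estimate(history, {"kloc": 11, "team": 4}))  # -> 30
```

Note how this illustrates two claims from the abstract: the method makes no parametric assumptions (any numeric attributes work, not just a fixed COCOMO ontology), and maintenance is just appending new rows to `history`.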