Peopleware: productive projects and teams
IEEE Transactions on Software Engineering - Special issue on formal methods in software practice
Wrappers for feature subset selection
Artificial Intelligence - Special issue on relevance
On the Optimality of the Simple Bayesian Classifier under Zero-One Loss
Machine Learning - Special issue on learning with probabilistic representations
Bayesian Analysis of Empirical Software Engineering Cost Models
IEEE Transactions on Software Engineering
Software Engineering Economics
Software Cost Estimation with COCOMO II
Software Verification and Validation: An Overview
IEEE Software
Safe and Simple Software Cost Analysis
IEEE Software
Converging on the Optimal Attainment of Requirements
RE '02 Proceedings of the 10th Anniversary IEEE Joint International Conference on Requirements Engineering
Practical Large Scale What-if Queries: Case Studies with Software Risk Assessment
ASE '00 Proceedings of the 15th IEEE international conference on Automated software engineering
Proceedings of the 17th IEEE international conference on Automated software engineering
Bayesian analysis of software cost and quality models
Data Mining for Very Busy People
Computer
Benchmarking Attribute Selection Techniques for Discrete Class Data Mining
IEEE Transactions on Knowledge and Data Engineering
Getting Results from Search-Based Approaches to Software Engineering
Proceedings of the 26th International Conference on Software Engineering
The Cross Entropy Method: A Unified Approach To Combinatorial Optimization, Monte-carlo Simulation (Information Science and Statistics)
Selecting Best Practices for Effort Estimation
IEEE Transactions on Software Engineering
Can we build software faster and better and cheaper?
PROMISE '09 Proceedings of the 5th International Conference on Predictor Models in Software Engineering
How to avoid drastic software process change (using stochastic stability)
ICSE '09 Proceedings of the 31st International Conference on Software Engineering
On the Relative Merits of Software Reuse
ICSP '09 Proceedings of the International Conference on Software Process: Trustworthy Software Development Processes
Accurate estimates without calibration?
ICSP '08 Proceedings of the 2008 International Conference on Software Process: Making Globally Distributed Software Development a Success Story
Case-based reasoning vs parametric models for software quality optimization
Proceedings of the 6th International Conference on Predictive Models in Software Engineering
A second look at Faster, Better, Cheaper
Innovations in Systems and Software Engineering
Adoption of advanced automated SE (ASE) tools would be favored if a business case could be made that these tools are more valuable than alternate methods. In theory, software prediction models can be used to make that case. In practice, this is complicated by the "local tuning" problem: predictors for software effort, defects, and threats normally use local data to tune their predictions, but such local tuning data is often unavailable. This paper shows that assessing the relative merits of different SE methods need not require precise local tunings. STAR1, a simulated annealer with a Bayesian post-processor, explores the space of possible local tunings within software prediction models and ranks project decisions by their effects on effort, defects, and threats. In experiments with two NASA systems, STAR1 found that ASE tools were necessary to minimize effort, defects, and threats.
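The abstract's core idea is to search the space of possible local tunings rather than demand exact local data. A minimal sketch of the simulated-annealing step it relies on is shown below; the one-dimensional "effort multiplier" tuning space and the quadratic score are illustrative stand-ins (not the paper's actual COCOMO-style models), and `simulated_anneal` is a hypothetical helper, not STAR1 itself.

```python
import math
import random

random.seed(1)

def simulated_anneal(score, neighbor, init, steps=2000, t0=1.0):
    """Minimize `score` by simulated annealing.

    score    : candidate -> float (lower is better)
    neighbor : candidate -> nearby candidate
    init     : starting candidate
    """
    cur, cur_s = init, score(init)
    best, best_s = cur, cur_s
    for k in range(1, steps + 1):
        t = t0 * (1 - k / steps)          # linear cooling schedule
        cand = neighbor(cur)
        cand_s = score(cand)
        # Always accept improvements; accept worse moves with
        # probability exp(-delta/t) while temperature remains.
        if cand_s < cur_s or (t > 0 and
                              random.random() < math.exp((cur_s - cand_s) / t)):
            cur, cur_s = cand, cand_s
        if cur_s < best_s:                # track the best tuning seen
            best, best_s = cur, cur_s
    return best, best_s

# Toy stand-in for a prediction model's tuning space: one effort
# multiplier in [0.5, 2.0]; the score plays the role of predicted
# effort + defects, minimized at 1.2.
score = lambda x: (x - 1.2) ** 2 + 0.1
step = lambda x: min(2.0, max(0.5, x + random.uniform(-0.05, 0.05)))
best, best_s = simulated_anneal(score, step, init=0.5)
```

In STAR1 the candidates would be full sets of model tunings and project decisions, and the surviving best candidates feed a Bayesian post-processor that ranks individual decisions; this sketch only illustrates the annealing loop.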