SEW '02 Proceedings of the 27th Annual NASA Goddard Software Engineering Workshop (SEW-27'02)
Planning for the optimal attainment of requirements is an important early-lifecycle activity. However, such planning is difficult when dealing with competing requirements, limited resources, and the incompleteness of information available at requirements time. A novel approach to requirements optimization is described. A requirements interaction model is executed to randomly sample the space of options. This produces a large amount of data, which is then condensed by a summarization tool. The result is a small list of critical decisions (i.e., those most influential in leading towards the desired optimum). This focuses human experts' attention on relatively few decisions and makes them aware of the major alternatives. The approach is iterative. Each iteration allows experts to select from among the major alternatives. In successive iterations the execution and summarization modules are run again, each time further constrained by the decisions made in the previous iteration. In the case study shown here, out of 99 yes/no decisions (approximately 10^30 possibilities), five iterations were sufficient to find and make the 30 key ones.
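The iterative loop the abstract describes — randomly sample the decision space, summarize the samples to find the most influential open decision, fix it, and repeat under that constraint — can be sketched as below. This is a minimal toy illustration, not the paper's actual system: the scoring function, the sample sizes, and the greedy one-decision-per-iteration summarizer are all assumptions standing in for the real requirements interaction model and the TAR2-style treatment learner.

```python
import random

# Toy stand-ins (hypothetical): a weighted-sum "requirements interaction
# model" over yes/no decisions. The real case study used 99 decisions.
N_DECISIONS = 20
random.seed(0)
WEIGHTS = [random.uniform(-1, 1) for _ in range(N_DECISIONS)]

def score(decisions):
    """Toy objective: benefit of the 'yes' decisions taken."""
    return sum(w for w, d in zip(WEIGHTS, decisions) if d)

def sample(fixed, n=1000):
    """Randomly sample the option space, respecting decisions already fixed."""
    runs = []
    for _ in range(n):
        d = [fixed.get(i, random.random() < 0.5) for i in range(N_DECISIONS)]
        runs.append((score(d), d))
    return runs

def summarize(runs, fixed):
    """Crude summarizer: pick the open decision whose setting best
    separates the top-scoring 10% of runs from a coin flip."""
    runs.sort(key=lambda r: r[0], reverse=True)
    top = [d for _, d in runs[: len(runs) // 10]]
    best_i, best_setting, best_gap = None, None, 0.0
    for i in range(N_DECISIONS):
        if i in fixed:
            continue
        freq = sum(d[i] for d in top) / len(top)  # how often top runs say "yes"
        if abs(freq - 0.5) > best_gap:
            best_i, best_setting, best_gap = i, freq > 0.5, abs(freq - 0.5)
    return best_i, best_setting

# Iterate: each round fixes one more key decision, constraining the next round.
fixed = {}
for _ in range(10):
    i, setting = summarize(sample(fixed), fixed)
    if i is None:
        break
    fixed[i] = setting

print(f"Fixed {len(fixed)} key decisions: {sorted(fixed.items())}")
```

The point of the loop is data reduction: experts never inspect the thousands of sampled runs, only the short list of decisions the summarizer promotes each round.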