Whilst some software measurement research has been unquestionably successful, other research has struggled to enable expected advances in project and process management. Contributing to this lack of advancement has been the incidence of inappropriate or non-optimal application of various model-building procedures. This obviously raises questions over the validity and reliability of any results obtained, as well as the conclusions that may have been drawn regarding the appropriateness of the techniques in question. In this paper we investigate the influence of various data set characteristics and the purpose of analysis on the effectiveness of four model-building techniques: three statistical methods and one neural network method. In order to illustrate the impact of data set characteristics, three separate data sets, drawn from the literature, are used in this analysis. In terms of predictive accuracy, it is shown that no one modeling method is best in every case. Some consideration of the characteristics of data sets should therefore occur before analysis begins, so that the most appropriate modeling method is then used. Moreover, issues other than predictive accuracy may have a significant influence on the selection of model-building methods. These issues are also addressed here, and a series of guidelines for selecting among and implementing these and other modeling techniques is discussed.
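The abstract's central claim — that data set characteristics should drive the choice of model-building technique — can be sketched with a minimal example. The data below are invented for illustration (they are not one of the paper's three data sets): a roughly linear size/effort relationship containing one outlying project. An ordinary least squares fit is compared against a robust Theil–Sen fit, scored with MMRE (mean magnitude of relative error), one common accuracy criterion in this literature.

```python
import statistics

# Invented (size, effort) points with one outlier at x=50; illustrative only.
data = [(10, 22), (20, 41), (30, 63), (40, 79), (50, 500), (60, 121)]
xs = [x for x, _ in data]
ys = [y for _, y in data]

def ols_fit(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def theil_sen_fit(xs, ys):
    """Robust fit: slope = median of all pairwise slopes (Theil-Sen)."""
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i in range(len(xs)) for j in range(i + 1, len(xs))]
    b = statistics.median(slopes)
    a = statistics.median(y - b * x for x, y in zip(xs, ys))
    return a, b

def mmre(fit, pairs):
    """Mean magnitude of relative error of a linear fit over (x, y) pairs."""
    a, b = fit
    return sum(abs(y - (a + b * x)) / y for x, y in pairs) / len(pairs)

ols_mmre = mmre(ols_fit(xs, ys), data)
robust_mmre = mmre(theil_sen_fit(xs, ys), data)
print(f"OLS MMRE:       {ols_mmre:.3f}")
print(f"Theil-Sen MMRE: {robust_mmre:.3f}")
```

On this data the robust fit achieves a much lower MMRE because the outlier drags the least-squares line away from the bulk of the projects; on a clean, homoscedastic data set the ranking could easily reverse — which is the abstract's point that no one method is best in every case.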