Tests for consistent measurement of external subjective software quality attributes
Empirical Software Engineering
Most external software quality attributes are conceptually subjective. For example, maintainability is an external software quality attribute, and it is subjective because interpersonally agreed definitions of the attribute include the phrase "the ease with which maintenance tasks can be performed". Subjectivity clearly makes measurement of these attributes, and validation of prediction systems for them, problematic. Indeed, despite such definitions, few statistically valid attempts at determining the predictive capability of prediction systems for external quality attributes have been published. Where validations have been attempted, one approach has been to ask experts to indicate whether the values produced by the prediction system informally agree with their intuition. Such attempts are undertaken without first determining, independently of the prediction system, whether the experts are capable of consistent direct measurement of the attribute. Consequently, a statistically valid and unbiased estimate of the prediction system's predictive capability cannot be obtained, because the experts' measurement process is not independent of the prediction system's values. In this paper, it is argued that the problem of subjective measurement of quality attributes should not be ignored if quality is to be introduced into software in a controlled way. Further, it is argued that direct measurement of quality attributes should be encouraged, and that such measurement can in fact be quantified to establish consistency using an existing approach. However, the approach needs to be made more accessible to promote its use. This would make it possible to decide whether consistent, independent estimates of the true values of software quality attributes can be assigned, and whether prediction systems for quality attributes can be developed.
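The abstract does not name the "existing approach" for quantifying measurement consistency, but a standard family of techniques for this problem is chance-corrected inter-rater agreement statistics such as Cohen's kappa, which compares the observed agreement between two raters against the agreement expected by chance. A minimal sketch, with hypothetical ordinal maintainability ratings from two assumed experts:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    assigning categorical (here, ordinal) ratings to the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of exact agreement between the two raters.
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected if both raters assigned categories independently,
    # according to their individual marginal rating frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    pe = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (po - pe) / (1 - pe)

# Hypothetical data: two experts rate the maintainability of ten
# modules on an ordinal scale (1 = poor ... 4 = excellent).
expert_1 = [3, 2, 4, 1, 3, 3, 2, 4, 1, 2]
expert_2 = [3, 2, 3, 1, 3, 2, 2, 4, 1, 2]
print(round(cohens_kappa(expert_1, expert_2), 3))  # → 0.726
```

A kappa near 1 indicates the experts measure the attribute consistently (so their ratings could serve as independent reference values); a kappa near 0 indicates agreement no better than chance, in which case validating a prediction system against such ratings would be meaningless.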