Measuring the quality of structured designs
Journal of Systems and Software
It has been recognized that success in producing designs that yield reliable software, even with Structured Design, depends heavily on the experience of the designer. The gap in the methodology is the absence of easily applied quantitative measures of quality that would reduce the dependence of reliable systems on the scarce availability of expert designers. Several metrics have been devised that, when applied to design structure charts, can pinpoint sections of a design likely to cause problems during coding, debugging, integration, and modification. These metrics can provide an independent, unbiased evaluation of design quality. They have been validated against program error data from two recently completed software projects at Hughes, and the results indicate that the metrics can serve as a predictive measure of the program errors experienced during development. Guidelines for interpreting the design metric values are summarized, and an interactive structure-chart graphics system that simplifies metric calculation is briefly described.
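The abstract does not reproduce the metric definitions themselves. As a purely illustrative sketch, assuming a structure chart represented as a call graph, classic coupling indicators of the kind applied to structure charts (fan-out, the number of distinct modules a module invokes, and fan-in, the number of distinct callers) can be computed and used to flag candidate trouble spots; the threshold below is an arbitrary example value, not one taken from the paper.

```python
def fan_metrics(calls):
    """calls: dict mapping each module to the list of modules it invokes.
    Returns (fan_in, fan_out) dicts over all modules in the chart."""
    fan_out = {m: len(set(callees)) for m, callees in calls.items()}
    fan_in = {m: 0 for m in calls}
    for callees in calls.values():
        for c in set(callees):
            fan_in[c] = fan_in.get(c, 0) + 1
    return fan_in, fan_out

def flag_candidates(calls, max_fan_out=7):
    """Flag modules whose fan-out exceeds a threshold -- candidate
    sections that may cause problems during coding, debugging,
    integration, and modification (illustrative rule only)."""
    _, fan_out = fan_metrics(calls)
    return sorted(m for m, n in fan_out.items() if n > max_fan_out)

# Hypothetical structure chart: "process" calls eight subordinates.
chart = {
    "main": ["input", "process", "report"],
    "process": ["a", "b", "c", "d", "e", "f", "g", "h"],
    "input": [], "report": [],
    "a": [], "b": [], "c": [], "d": [],
    "e": [], "f": [], "g": [], "h": [],
}
print(flag_candidates(chart))  # → ['process']
```

In this toy chart only "process" exceeds the example fan-out limit; in practice such a flag would direct a reviewer's attention to that portion of the design rather than mechanically condemn it.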