Assessing the numerous models that students produce in written exams or homework is an exhausting task. We present an approach for the fair and transparent assessment of model completeness with respect to a natural-language domain description. The assessment is based on checklists generated by the tool Sumo χ, which works directly on an annotated version of the original exam text, so no ‘gold standard’ solution is needed.