Discovering models of software processes from event-based data
ACM Transactions on Software Engineering and Methodology (TOSEM)
Measuring Process Consistency: Implications for Reducing Software Defects
IEEE Transactions on Software Engineering
Classification and evaluation of defects in a project retrospective
Journal of Systems and Software
Towards a New Approach on Software Process Evolution
AICCSA '01 Proceedings of the ACS/IEEE International Conference on Computer Systems and Applications
Discovering the software process by means of stochastic workflow analysis
Journal of Systems Architecture: the EUROMICRO Journal - Special issue: AGILE methodologies for software production
Conformance checking of processes based on monitoring real behavior
Information Systems
Quantifying process equivalence based on observed behavior
Data & Knowledge Engineering
Running an Agile Software Development Project
Process mining framework for software processes
ICSP'07 Proceedings of the 2007 international conference on Software process
Focused identification of process model changes
ICSP'07 Proceedings of the 2007 international conference on Software process
ProM 4.0: comprehensive support for real process analysis
ICATPN'07 Proceedings of the 28th international conference on Applications and theory of Petri nets and other models of concurrency
Background: When teams follow a software development process, they do not follow it consistently, so a method is needed to measure their fidelity to that process. Objective: To evaluate Rozinat and van der Aalst's metrics for the conformance of noisy event data to a state-based process model (Rozinat and van der Aalst, 2008). Method: We instructed 14 teams that were developing a software system using Extreme Programming (XP) to record the events of their project (for example, writing code or testing). We calculated the values of the proposed metrics by comparing the collected data to a process model of XP. Results: 13 teams recorded data, which we treat as a multiple case study. The fitness metric gave varying results across the teams, corresponding to the number of event types used in the correct order. The appropriateness metrics measured the same values for all teams. Conclusion: The fitness metric is useful for measuring fidelity, but the appropriateness metrics do not measure overfitting well on noisy data. In addition, neither metric gave useful information about other aspects of the process, such as iteration.
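For context, the fitness metric evaluated in the abstract is defined by Rozinat and van der Aalst (2008) in terms of token replay of the event log against a Petri-net model of the process: f = 1/2 (1 - sum(n*m)/sum(n*c)) + 1/2 (1 - sum(n*r)/sum(n*p)), where for each distinct trace n is its frequency, m the missing tokens, r the remaining tokens, c the consumed tokens, and p the produced tokens. The sketch below shows how that value is computed once per-trace replay counts are available; the TraceReplay structure and all numeric values are hypothetical illustrations, not data or code from the study.

from dataclasses import dataclass

@dataclass
class TraceReplay:
    """Token-replay counts for one distinct trace (hypothetical values)."""
    count: int      # n: how many times this trace occurs in the log
    missing: int    # m: tokens that had to be created artificially during replay
    remaining: int  # r: tokens left behind in the net after replay
    consumed: int   # c: tokens consumed during replay
    produced: int   # p: tokens produced during replay

def fitness(traces: list[TraceReplay]) -> float:
    """Token-based fitness as defined by Rozinat and van der Aalst (2008)."""
    missing   = sum(t.count * t.missing   for t in traces)
    consumed  = sum(t.count * t.consumed  for t in traces)
    remaining = sum(t.count * t.remaining for t in traces)
    produced  = sum(t.count * t.produced  for t in traces)
    return 0.5 * (1 - missing / consumed) + 0.5 * (1 - remaining / produced)

# Hypothetical replay results: a log that follows the model closely scores near 1.0,
# while a noisy log (events missing or out of order) scores noticeably lower.
close_fit = [TraceReplay(count=10, missing=0,  remaining=0, consumed=40, produced=40)]
noisy_fit = [TraceReplay(count=10, missing=12, remaining=9, consumed=40, produced=40)]
print(f"close-to-model log: {fitness(close_fit):.2f}")  # 1.00
print(f"noisy log:          {fitness(noisy_fit):.2f}")  # 0.74

This mirrors the behaviour reported in the abstract: fitness varies with how many event types appear in the modelled order, whereas the appropriateness metrics (not sketched here) compare model structure and behaviour rather than replay counts.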