This paper reports the results of a controlled experiment investigating whether the methodology support offered by a CASE tool affects the tool's acceptance and actual use by individuals. Subjects used the process modelling tool SPEARMINT to complete a partial process model and remove all inconsistencies. Half of the subjects used a variant of SPEARMINT that corrected consistency violations automatically and silently, while the other half used a variant that notified them of inconsistencies immediately and persistently but did not correct them automatically. Acceptance was measured, and actual use predicted, on the basis of the technology acceptance model, supplemented by beliefs about consistency rules.

The impact of the form of automated consistency assurance applied to hierarchical consistency rules was significant at the 0.05 level (type I error of 0.027), explaining 71.6% of the variance in CASE tool acceptance. However, intention to use, and thus predicted use, was of the same magnitude for both variants of SPEARMINT, whereas perceived usefulness and perceived ease of use were affected in opposite directions.

Internal validity of the findings was threatened by validity and reliability issues related to the beliefs about consistency rules; further research is needed here to develop valid constructs and reliable scales. Following the experiment, a small survey among experienced users of SPEARMINT found that the preferred form of automated consistency assurance varied with individual, consistency-rule, and task characteristics. Based on these findings, it is recommended that vendors provide CASE tools with adaptable methodology support, allowing users to fit automated consistency assurance to the task at hand.
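The contrast between the two experimental variants, and the recommendation of adaptable methodology support, can be sketched as a selectable consistency-assurance policy. This is a minimal illustration only: the function and rule names are hypothetical, not SPEARMINT's actual API, and a rule is assumed to be a pair of (check, repair) callables over a process model.

```python
from enum import Enum, auto

class Policy(Enum):
    AUTO_CORRECT = auto()   # variant 1: correct violations automatically and silently
    NOTIFY = auto()         # variant 2: report violations immediately and persistently

def assure_consistency(model, rules, policy):
    """Apply each consistency rule under the chosen assurance policy.

    `rules` is a list of (check, repair) pairs; `check(model)` returns True
    when the rule holds, `repair(model)` restores consistency. Returns the
    names of rules left for the user to fix (empty under AUTO_CORRECT).
    """
    warnings = []
    for check, repair in rules:
        if not check(model):
            if policy is Policy.AUTO_CORRECT:
                repair(model)                 # silent correction, user never sees it
            else:
                warnings.append(check.__name__)  # persistent warning, no correction
    return warnings

# Hypothetical rule: every activity in the process model must be named.
def has_names(model):
    return all("name" in a for a in model["activities"])

def add_default_names(model):
    for i, a in enumerate(model["activities"]):
        a.setdefault("name", f"activity-{i}")
```

Letting the user choose the `Policy` per rule or per task is one way a vendor could realize the adaptable support the survey respondents asked for.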