This article presents empirical research results on the "history effect" in software quality evaluation processes. Most software quality models and evaluation process models assume that software quality can be evaluated deterministically, especially when the evaluation is performed by experts; consequently, software developers focus on the technical characteristics of the software product. A similar assumption is common in most engineering disciplines. For other kinds of goods, however, this assumption of objective evaluation has been shown to be violated as a consequence of the limitations of human cognitive processes. The ongoing discussion in behavioral economics raises the question: are experts prone to observation biases? If they are, then software quality models overlook an important aspect of software quality evaluation. This article proposes an experiment designed to trace the influence of users' prior knowledge on software quality assessment. Measuring the influence of a single variable on the software quality perception process is a complex task: there is no valid quality model for the precise measurement of product quality, and consequently software engineering lacks tools to freely manipulate the quality level of a product. This article therefore proposes a simplified method for manipulating the observed quality level, which makes such research feasible. The proposed experiment was conducted among professional software evaluators. The results show a significant negative influence (large effect size) of users' negative experience on their final opinion about software quality, regardless of its actual level.
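The abstract reports the result as a "large effect size". As a minimal sketch of how such a comparison is typically quantified, the snippet below computes Cohen's d for two independent groups of quality ratings; the scores and group names are hypothetical illustrations, not the study's data.

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    # Sample variances (Bessel's correction)
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical quality scores on a 1-10 scale (not the study's data):
# evaluators with no prior information vs. evaluators given a negative history.
control = [7, 8, 7, 6, 8, 7, 7, 8]
negative_history = [5, 4, 5, 6, 4, 5, 5, 4]

d = cohens_d(control, negative_history)
print(round(d, 2))  # by convention, d > 0.8 counts as a "large" effect
```

Under Cohen's conventional thresholds (0.2 small, 0.5 medium, 0.8 large), a large effect means the rating distributions of the two groups barely overlap, which is what the reported result implies.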