Measuring usability: are effectiveness, efficiency, and satisfaction really correlated? Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
The state of the art in automating usability evaluation of user interfaces. ACM Computing Surveys (CSUR).
Usability Engineering.
A method to standardize usability metrics into a single score. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Meta-analysis of correlations among usability measures. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
A few notes on the study of beauty in HCI. Human-Computer Interaction.
Non-universal usability?: a survey of how usability is understood by Chinese and Danish users. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Comparison of three one-question, post-task usability questionnaires. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Correlations among prototypical usability metrics: evidence for the construct of usability. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Editorial: Modelling user experience - An agenda for research and practice. Interacting with Computers.
Measuring effectiveness of HCI integration in software development processes. Journal of Systems and Software.
The measurability and predictability of user experience. Proceedings of the 3rd ACM SIGCHI Symposium on Engineering Interactive Computing Systems.
Testing & quantifying ERP usability. Proceedings of the 1st Annual Conference on Research in Information Technology.
A machine learning-based usability evaluation method for eLearning systems. Decision Support Systems.
Master Usability Scaling (MUS) is a measurement method for developing a universal usability continuum based on magnitude estimation and master scaling. The universal usability continuum allows true ratio comparisons, potentially between all items measurable by the construct of usability (attributes, tasks, or products, whether software or hardware) that have contributed to the meta-set by following the prescribed procedures. This paper describes the background for MUS, its data reduction, and case studies in software usability assessment.

MUS is based on a new measurement method of usability, Usability Magnitude Estimation (UME) [9], in which users estimate usability magnitude according to an objective definition of usability. UME allows all items measured within a single usability activity to be compared along one continuum. MUS uses UME to assess standard reference tasks across different usability activities, constructing a single meta-set of data that can be represented as a universal usability continuum. MUS is simple to administer, easy to comprehend, and, with advanced underlying calculations, powerful to use. The MUS continuum has the potential to become a widespread, robust, universal measurement scale of usability.
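To make the data-reduction idea concrete, the following is a minimal sketch of a standard magnitude-estimation reduction step (not the paper's exact MUS procedure): each participant's free-range ratio judgments are normalized by their own geometric mean, then pooled per item with a geometric mean across participants, yielding ratio-scale scores on one continuum. The participant labels, task names, and numbers are hypothetical.

```python
from statistics import geometric_mean

# Hypothetical magnitude-estimation data: each participant assigns a free
# ratio-scale number to the perceived usability of each item, including a
# shared reference task ("ref"). Raw numbers are not comparable across
# participants, who each use their own personal number range.
raw_estimates = {
    "p1": {"task_a": 10, "task_b": 40, "ref": 20},
    "p2": {"task_a": 2, "task_b": 9, "ref": 5},
    "p3": {"task_a": 55, "task_b": 210, "ref": 100},
}

def normalize(estimates):
    """Rescale one participant's estimates so their geometric mean is 1.

    This removes the rater's idiosyncratic number range while preserving
    the ratios between their judgments, a common reduction step for
    magnitude-estimation data.
    """
    gm = geometric_mean(estimates.values())
    return {item: value / gm for item, value in estimates.items()}

# Pool the normalized judgments: the per-item geometric mean across
# participants gives one ratio-scale score per item, so "task_b is
# roughly four times as usable as task_a" is a meaningful statement.
normalized = [normalize(est) for est in raw_estimates.values()]
items = list(next(iter(raw_estimates.values())))
scores = {
    item: geometric_mean(n[item] for n in normalized) for item in items
}

for item, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{item}: {score:.3f}")
```

Linking such reduced sets through shared reference tasks is what would allow separate usability activities to be placed on a single meta-continuum.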