Remote evaluation: the network as an extension of the usability laboratory
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Although existing lab-based formative evaluation is frequently and effectively applied to improving the usability of software user interfaces, it has limitations that have led to the concept of remote usability evaluation. Perhaps the most significant impetus for remote usability evaluation methods is the need for a project team to continue formative evaluation downstream, after deployment. The usual kinds of alpha and beta testing do not qualify as formative usability evaluation because they do not yield detailed data observed during usage and associated closely with specific task performance. Critical incident identification is arguably the single most important source of this kind of data. Consequently, we developed and evaluated a cost-effective remote usability evaluation method based on real users self-reporting critical incidents encountered in real tasks performed in their normal working environments. Results show that users with only brief training can identify, report, and rate the severity level of their own critical incidents.