The State of the Art in Automated Usability Evaluation of User
Remote evaluation methods are a means to detect usability issues without bringing users into a laboratory. This paper presents a brief survey of remote evaluation methods and discusses their benefits and challenges. The goal of this paper is to evaluate remote evaluation methods according to the point in a design cycle at which they are most useful, the type of data they can yield, and the degree to which each method can detect and shed light on usability issues. The most important measure of a method is its ability to identify usability issues in a given design. When compared to laboratory evaluations, some remote methods show promise, yielding task times and task completion rates that correlate highly with laboratory results on the same application. Two basic questions remain: what kinds of usability issues can you expect to find with each method, and how can you best use each one?