This article focuses on usability testing of mobile devices "in the wild". We are interested in ensuring the methodological validity of such analyses. We therefore implemented an original approach that consists in carrying out a meta-evaluation of two evaluations of the same quasi-realistic experiment. The first evaluation uses a traditional methodology similar to usability-laboratory settings; the second mimics the conditions of experiments carried out "in the wild". Our objective is to validate, in advance, the methodology for usability evaluations "in the wild".