The retrospective think-aloud method, in which participants work in silence and afterwards verbalize their thoughts while watching a recording of their performance, is often used for the evaluation of websites. However, participants may not always be able to recall what they were thinking, because the recording offers only a few visual cues to help them remember their task execution process. In our study we complemented the recording of the performance with a gaze trail of the participants' eye movements, in order to elicit more verbalizations. We compared traditional retrospective think-aloud protocols with this eye-movement variant. Contrary to our expectations, no differences were found between the two conditions in the number of problems, the ways in which these problems were detected, or the types of problems. Two possible explanations for this result are that seeing their own eye movements may be confronting and distracting for participants, and that the probing we used was rather generic. The added value of gaze replay might be greater when specific questions are asked based on the observed eye movements. Implications for usability practitioners are discussed in the conclusions of this paper.