In interaction design, several metrics are used to gather user experience data. A common approach is the survey, typically administered after users have experienced a product to elicit their opinions and satisfaction. This paper describes the use of the Smileyometer (part of the Fun Toolkit) to evaluate user experience with children by asking for opinions on expected as well as experienced fun. Two studies examined the ratings that children, from two different age groups and in two different contexts, gave to a set of varied age-appropriate interactive technology installations. Ratings given before use (expectations) are compared with ratings given after use (experience) across the age groups and across the installations. The studies show that different installations received different ratings and that there were age-related differences in how the Smileyometer was used to rate user experience. These findings indicate both that children can, and do, discriminate between different experiences and that children reflect on user experience after using technologies. In most cases, across both age groups, children expected a lot from the technologies, and their after-use (experienced) ratings confirmed that this was what they had got. The paper concludes by considering the implications of the collective findings for the design and evaluation of technologies with children.
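The before/after comparison described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' analysis: the installation names and ratings are hypothetical, and it assumes only that the Smileyometer yields a 5-point rating (1 = "awful" to 5 = "brilliant") collected once before and once after use.

```python
# Hypothetical Smileyometer data: 5-point ratings (1 = "awful" ... 5 = "brilliant")
# recorded before (expected fun) and after (experienced fun) use of each
# installation. Names and values are invented for illustration.
ratings = {
    "installation_a": {"before": [5, 4, 5, 5, 4], "after": [5, 5, 4, 5, 5]},
    "installation_b": {"before": [4, 5, 3, 4, 5], "after": [3, 3, 4, 2, 3]},
}

def mean(xs):
    return sum(xs) / len(xs)

def expectation_gap(data):
    """Mean experienced rating minus mean expected rating, per installation.

    A value near zero suggests the experience matched expectations
    (the common pattern reported in the abstract); a clearly negative
    value suggests the installation fell short of them.
    """
    return {
        name: round(mean(r["after"]) - mean(r["before"]), 2)
        for name, r in data.items()
    }

print(expectation_gap(ratings))
```

With the invented data, installation_a roughly matches expectations while installation_b falls short, which is the kind of per-installation discrimination the studies probe.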