Software engineering (6th ed.)
Usability testing with young children
Proceedings of the 2004 conference on Interaction design and children: building a community
A comparison of think-aloud and post-task interview for usability testing with children
Proceedings of the 2004 conference on Interaction design and children: building a community
Development and evaluation of the problem identification picture cards method
Cognition, Technology and Work
Evaluating Children's Interactive Products: Principles and Practices for Interaction Designers
Interactive whiteboards in the living room?: asking children about their technologies
BCS-HCI '08 Proceedings of the 22nd British HCI Group Annual Conference on People and Computers: Culture, Creativity, Interaction - Volume 2
Introducing a Pairwise Comparison Scale for UX Evaluations with Preschoolers
INTERACT '09 Proceedings of the 12th IFIP TC 13 International Conference on Human-Computer Interaction: Part II
All work and no play: Measuring fun, usability, and learning in software for children
Computers & Education - Virtual learning? Selected contributions from the CAL 05 symposium
A structured expert evaluation method for the evaluation of children's computer games
INTERACT'05 Proceedings of the 2005 IFIP TC13 international conference on Human-Computer Interaction
The nature of child computer interaction
BCS-HCI '11 Proceedings of the 25th BCS Conference on Human-Computer Interaction
While many models exist to support the design process of a software development project, the evaluation process is far less well defined, and this lack of definition often leads to poorly designed evaluations or the choice of an inappropriate evaluation method. Evaluating products for children is especially complex: such evaluations must account for the different requirements and aims that a product may have, and they often rely on new or still-developing methods. This paper takes the view that evaluations should be planned from the start of a project in order to yield the best results, and it proposes a framework to facilitate such planning. The framework is particularly intended to support the varied and often conflicting requirements of a product designed for children, as defined by the PLU model, but it could be adapted for other user groups.