A comparison of usage evaluation and inspection methods for assessing groupware usability
GROUP '01 Proceedings of the 2001 International ACM SIGGROUP Conference on Supporting Group Work
WETICE '00 Proceedings of the 9th IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises
Heuristic evaluation: Comparing ways of finding and reporting usability problems
Interacting with Computers
The semiotic inspection method
IHC '06 Proceedings of the VII Brazilian Symposium on Human Factors in Computing Systems
Evaluation of Manas in identifying social impact problems: a case study
Proceedings of the VIII Brazilian Symposium on Human Factors in Computing Systems
Can inspection methods generate valid new knowledge in HCI? The case of semiotic inspection
International Journal of Human-Computer Studies
Investigating the Applicability of the Semiotic Inspection Method to Collaborative Systems
SBSC '09 Proceedings of the 2009 Simpósio Brasileiro de Sistemas Colaborativos
Dogmas in the assessment of usability evaluation methods
Behaviour & Information Technology
Semiotic Engineering Methods for Scientific Research in HCI
Do patterns help novice evaluators? A comparative study
International Journal of Human-Computer Studies
Structuring dimensions for collaborative systems evaluation
ACM Computing Surveys (CSUR)
An initial analysis of communicability evaluation methods through a case study
CHI '12 Extended Abstracts on Human Factors in Computing Systems
Applicability of the semiotic inspection method: a systematic literature review
Proceedings of the 10th Brazilian Symposium on Human Factors in Computing Systems and the 5th Latin American Conference on Human-Computer Interaction
Deciding what to evaluate in collaborative systems, and how to do so, is still a challenge. Many evaluation methods have been proposed for, or extended to, collaborative systems, and most of them have not been consolidated. Furthermore, comparing existing methods in order to choose the most suitable one for an evaluation is not a simple task. In this paper we propose a set of qualitative criteria aimed at identifying the aspects a method focuses on. The goal is to support evaluators in deciding which method would be most useful for a given context. To assess the set of criteria, we conducted a case study in which two different collaborative systems were evaluated with three different Semiotic Engineering-based evaluation methods. The study showed that the set of criteria was useful in indicating which aspects each method generated more information about.