User interface evaluation in the real world: a comparison of four techniques
CHI '91 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
CHI '93 Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems
A comparison of usability techniques for evaluating design
DIS '97 Proceedings of the 2nd conference on Designing interactive systems: processes, practices, methods, and techniques
Human-computer interaction (2nd ed.)
Supporting flexible roles in a shared space
CSCW '98 Proceedings of the 1998 ACM conference on Computer supported cooperative work
Usability Engineering
A Practical Guide to Usability Testing
Heuristic Evaluation of Groupware Based on the Mechanics of Collaboration
EHCI '01 Proceedings of the 8th IFIP International Conference on Engineering for Human-Computer Interaction
A Review of Groupware Evaluations
WETICE '00 Proceedings of the 9th IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises
CollabLogger: A Tool for Visualizing Groups at Work
WETICE '00 Proceedings of the 9th IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises
Groupware walkthrough: adding context to groupware usability evaluation
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Empirical development of a heuristic evaluation methodology for shared workspace groupware
CSCW '02 Proceedings of the 2002 ACM conference on Computer supported cooperative work
Beyond relative advantage: factors in end-user uptake of computer supported cooperative work
Advanced topics in end user computing
ACM Transactions on Computer-Human Interaction (TOCHI)
A laboratory method for studying activity awareness
Proceedings of the third Nordic conference on Human-computer interaction
Situating evaluation in scenarios of use
CSCW '04 Proceedings of the 2004 ACM conference on Computer supported cooperative work
User-centred design and evaluation of ubiquitous services
Proceedings of the 23rd annual international conference on Design of communication: documenting & designing for pervasive information
"...real, concrete facts about what works...": integrating evaluation and design through patterns
GROUP '05 Proceedings of the 2005 international ACM SIGGROUP conference on Supporting group work
A framework for asynchronous change awareness in collaborative documents and workspaces
International Journal of Human-Computer Studies
Challenges for user testing in collaborative systems: lessons from a case study
IHC '06 Proceedings of VII Brazilian symposium on Human factors in computing systems
Suitable notification intensity: the dynamic awareness system
Proceedings of the 2007 international ACM conference on Supporting group work
Personal and Ubiquitous Computing - Special Issue: User-centred design and evaluation of ubiquitous groupware
Designing and evaluating online communities: research speaks to emerging practice
International Journal of Web Based Communities
Let your users do the testing: a comparison of three remote asynchronous usability testing methods
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Scenario-Based Methods for Evaluating Collaborative Systems
Computer Supported Cooperative Work
Peer activities on Web-learning platforms--Impact on collaborative writing and usability issues
Education and Information Technologies
Remote Hands-On Experience: Distributed Collaboration with Augmented Reality
EC-TEL '09 Proceedings of the 4th European Conference on Technology Enhanced Learning: Learning in the Synergy of Multiple Disciplines
The role of cognitive styles in groupware acceptance
HCI'07 Proceedings of the 12th international conference on Human-computer interaction: applications and services
Do patterns help novice evaluators? A comparative study
International Journal of Human-Computer Studies
Structuring dimensions for collaborative systems evaluation
ACM Computing Surveys (CSUR)
Analytic evaluation of groupware design
CSCWD'05 Proceedings of the 9th international conference on Computer Supported Cooperative Work in Design II
Comparing benchmark task and insight evaluation methods on timeseries graph visualizations
Proceedings of the 3rd BELIV'10 Workshop: BEyond time and errors: novel evaLuation methods for Information Visualization
Determinants of groupware usability for community care collaboration
APWeb'06 Proceedings of the 8th Asia-Pacific Web conference on Frontiers of WWW Research and Development
An initial analysis of communicability evaluation methods through a case study
CHI '12 Extended Abstracts on Human Factors in Computing Systems
Event-driven adaptive collaboration using semantically-enriched patterns
Expert Systems with Applications: An International Journal
A comparison of benchmark task and insight evaluation methods for information visualization
Information Visualization - Special issue on Evaluation for Information Visualization
Assessing the semiotic inspection method: the evaluators' perspective
Proceedings of the 11th Brazilian Symposium on Human Factors in Computing Systems
Criteria for Identifying the Focus of Evaluation Methods for Collaborative Systems
Proceedings of the X Brazilian Symposium in Collaborative Systems
Proceedings of the 12th Brazilian Symposium on Human Factors in Computing Systems
Many researchers believe that groupware can only be evaluated by studying real collaborators in their real contexts, a process that tends to be expensive and time-consuming. Others believe that it is more practical to evaluate groupware through usability inspection methods. Deciding between these two approaches is difficult because it is unclear how they compare in a real evaluation situation. To address this problem, we carried out a dual evaluation of a groupware system: one evaluation applied user-based techniques, and the other used inspection methods. Comparing the results from the two evaluations, we concluded that, while each method has its own strengths, weaknesses, and trade-offs, the two are complementary. Because the two methods also found overlapping problems, we expect that they can be used in tandem to good effect. For example, the discount inspection method can be applied before a field study, so that the system deployed in the more expensive field study has a better chance of doing well because some pertinent usability problems will already have been addressed.