Meeting assistants pose interesting and unique challenges for software design and evaluation. As the technology matures, evaluation methods must move beyond treating meeting browsers as mere signal-replay and information-search tools, and begin to assess the dimensions in which meeting assistants and browsers can augment or hinder human cognition and interaction. We consider some of these dimensions as they were encountered during the development of the DARPA CALO Meeting Assistant and Meeting Browser.