Designing systems that direct human action
CHI '05 Extended Abstracts on Human Factors in Computing Systems
As human-computer interaction becomes more closely modeled on human-human interaction, new techniques and strategies for human-computer interaction are required. In response to the inevitable shortcomings of recognition technologies, researchers have studied mediation: interaction techniques by which users can resolve system ambiguity and error. In this paper we approach the human-computer dialogue from the other side, examining system-initiated direction and mediation of human action. We conducted contextual interviews with a variety of experts in fields involving human-human direction, including a film director, photographer, golf instructor, and 911 operator. Informed by these interviews and a review of prior work, we present strategies for directing physical human action and an associated design space for systems that perform such direction. We illustrate these concepts with excerpts from our interviews and with our implemented system for automated media capture or "Active Capture," in which an unaided computer system uses techniques identified in our design space to act as a photographer, film director, and cinematographer.
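The loop the abstract describes — the system issues an instruction, senses the human's response, and escalates its direction until it is confident the action occurred — can be sketched as follows. This is a hypothetical illustration, not code from the paper: `sense`, `direct`, the prompts, and the confidence threshold are all invented stand-ins for the recognizers and direction strategies an Active Capture system would use.

```python
from dataclasses import dataclass

# Hypothetical sketch (not from the paper): a minimal system-initiated
# "direction loop" in the spirit of Active Capture. The system prompts
# the user, senses the result, and either accepts it or re-directs with
# a more specific instruction -- the system-driven analogue of mediation.

@dataclass
class Observation:
    confidence: float  # recognizer's confidence that the action occurred

def sense(attempt: int) -> Observation:
    # Stand-in for a real recognizer (vision/audio). Here confidence
    # simply improves as the directions become more specific.
    return Observation(confidence=min(1.0, 0.4 + 0.25 * attempt))

def direct(action: str, threshold: float = 0.8, max_attempts: int = 4):
    """Prompt, sense, and escalate direction until confident or give up."""
    prompts = [
        f"Please {action}.",
        f"Let's try again: {action} a bit more clearly.",
        f"Almost! {action} once more, facing the camera.",
    ]
    log = []
    for attempt in range(max_attempts):
        prompt = prompts[min(attempt, len(prompts) - 1)]
        obs = sense(attempt)
        log.append((prompt, round(obs.confidence, 2)))
        if obs.confidence >= threshold:
            return True, log
    return False, log

ok, transcript = direct("look at the camera")
```

The escalating prompts mirror the strategies human directors use in the interviews: start with a general instruction, then re-direct with progressively more concrete guidance when the sensed result falls short.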