We present a multimodal media center user interface with a hands-free speech recognition input method for users with physical disabilities. In addition to speech input, the application features a zoomable context + focus graphical user interface and several other modalities, including speech output, haptic feedback, and gesture input. These features were developed in cooperation with representatives of the target user groups. In this article, we focus on the speech input interface and its evaluations. We discuss the user interface design and the results of a long-term pilot study conducted in the homes of physically disabled users, and compare these results with those of a public pilot study and of laboratory studies carried out with non-disabled users.