We have developed a gesture input system that provides a common interaction technique across mobile, wearable and ubiquitous computing devices of diverse form factors. In this paper, we combine our gestural input technique with speech output and test whether the absence of a visual display impairs usability in this kind of multimodal interaction. This question is particularly relevant to mobile, wearable and ubiquitous systems, where visual displays may be restricted or unavailable. We conducted the evaluation using a prototype that combines gesture input with speech output to provide information to patients in a hospital Accident and Emergency Department. One group of participants was asked to access various services using gestural input; the services were delivered through automated speech output. Throughout their tasks, these participants could see a visual display on which a GUI presented the available services and their corresponding gestures. A second group performed the same tasks without this visual display. We predicted that participants without the visual display would make more incorrect gestures, and would take longer to perform correct gestures, than participants with the display. We found no significant difference in the number of incorrect gestures. Moreover, participants with the visual display actually took longer than those without it. These results suggest that, for a small set of semantically distinct services with memorable and distinct gestures, the absence of a GUI visual display does not impair the usability of a system combining gesture input with speech output.
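The interaction described above — a small set of distinct gestures, each mapped to one service whose response is spoken rather than displayed — can be sketched as a simple dispatch table. This is an illustrative sketch only, not the authors' implementation: the gesture labels, service names and response strings below are all hypothetical, and the speech step is stubbed as a returned string that a real system would pass to a text-to-speech engine.

```python
# Hypothetical sketch of gesture-to-speech dispatch for an A&E patient
# information system. Gesture labels, services and phrasing are invented
# for illustration; a real system would feed the result to a TTS engine.

# Each recognized gesture maps to exactly one service.
SERVICE_MAP = {
    "swipe_up": "waiting_time",
    "circle": "call_nurse",
    "zigzag": "directions",
}

# Each service has a spoken response instead of a GUI element.
SPEECH_RESPONSES = {
    "waiting_time": "The current waiting time is approximately forty minutes.",
    "call_nurse": "A nurse has been notified and will come to you shortly.",
    "directions": "The X-ray department is along the corridor on your left.",
}

def handle_gesture(gesture_label: str) -> str:
    """Resolve a recognized gesture to the text of its spoken response.

    Unrecognized gestures produce a spoken error prompt rather than a
    visual one, so the interaction remains usable without a display.
    """
    service = SERVICE_MAP.get(gesture_label)
    if service is None:
        return "Sorry, that gesture was not recognized. Please try again."
    return SPEECH_RESPONSES[service]

print(handle_gesture("circle"))
```

Keeping the gesture set small and the services semantically distinct, as the study's conclusion recommends, is what makes a flat lookup like this adequate: users can memorize the mapping without consulting a GUI.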