Assessing a multimodal user interface in a target acquisition task
BCS-HCI '12 Proceedings of the 26th Annual BCS Interaction Specialist Group Conference on People and Computers
Information display systems have become increasingly complex and more difficult for human cognition to process effectively. According to Wickens' Multiple Resource Theory (MRT), information delivered through multiple modalities (e.g., visual and tactile) can be processed more effectively than the same information delivered through a single modality. The purpose of this meta-analysis is to compare user effectiveness under visual-tactile task feedback (a multimodal condition) with effectiveness under visual task feedback alone (a single modality). Results indicate that visual-tactile feedback enhances task effectiveness more than visual feedback alone (g = .38). When individual criteria are assessed, visual-tactile feedback is particularly effective at reducing reaction time (g = .631) and increasing performance (g = .618). Follow-up moderator analyses indicate that visual-tactile feedback is more effective when workload is high (g = .844) and when multiple tasks are being performed (g = .767). Implications of the results are discussed in the paper.
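The effect sizes reported above are Hedges' g values, i.e., standardized mean differences with a small-sample bias correction. The sketch below shows how such a value is computed from two groups' summary statistics; the group means, standard deviations, and sample sizes in the usage example are hypothetical, not data from the studies in this meta-analysis.

```python
import math

def hedges_g(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Hedges' g: bias-corrected standardized mean difference
    between group A and group B."""
    # Pooled standard deviation across the two groups
    s_pooled = math.sqrt(((n_a - 1) * sd_a ** 2 + (n_b - 1) * sd_b ** 2)
                         / (n_a + n_b - 2))
    d = (mean_a - mean_b) / s_pooled           # Cohen's d
    j = 1 - 3 / (4 * (n_a + n_b) - 9)          # small-sample correction factor
    return d * j

# Hypothetical illustration: mean reaction times (s) for visual-only
# vs. visual-tactile feedback groups of 20 participants each.
g = hedges_g(mean_a=1.20, sd_a=0.30, n_a=20,
             mean_b=1.05, sd_b=0.28, n_b=20)
```

A positive g here would indicate slower reaction times in the visual-only group, i.e., an advantage for the visual-tactile condition; the correction factor shrinks the estimate slightly, which matters most for small samples.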