Earlier work shows that using a prosody specification derived from natural human spoken renditions increases the naturalness and overall acceptance of speech-synthesised complex visual structures, by conveying to audio certain semantic information hidden in the visual structure. However, although prosody alone yields significant improvement, it cannot perform adequately for very large, complex data tables browsed in a linear manner. This work reports on the use of earcons and spearcons combined with a prosodically enriched aural rendition of simple and complex tables. Three spoken combinations (earcons+prosody, spearcons+prosody, and prosody alone) were evaluated in order to examine how the resulting acoustic output improves the document-to-audio semantic correlation throughput from the visual modality. The results show that the use of non-speech sounds can further improve certain qualities, such as listening effort, a crucial parameter when vocalising any complex visual structure contained in a document.
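A spearcon is conventionally produced by time-compressing a spoken label until it is barely recognisable as speech. The paper does not describe its generation pipeline, but the core idea can be sketched as follows; this is a minimal illustration using naive resampling (production systems would typically use pitch-preserving time-scale modification), and the signal, sample rate, and compression factor here are illustrative assumptions, not values from the study:

```python
import numpy as np

def make_spearcon(speech: np.ndarray, compression: float = 0.4) -> np.ndarray:
    """Time-compress a spoken label into a spearcon-like cue.

    Naive linear resampling: the signal is read out at a faster rate,
    shortening it to `compression` times its original duration. Real
    spearcon generation preserves pitch; this sketch does not.
    """
    n_out = max(1, int(len(speech) * compression))
    # Sample the original signal at n_out evenly spaced positions.
    idx = np.linspace(0, len(speech) - 1, n_out)
    return np.interp(idx, np.arange(len(speech)), speech)

# Illustrative stand-in for a recorded spoken label: 1 s of a
# windowed 220 Hz tone at a 16 kHz sample rate.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
speech = np.sin(2 * np.pi * 220 * t) * np.hanning(sr)

spearcon = make_spearcon(speech, compression=0.4)
print(len(speech), len(spearcon))
```

With a compression factor of 0.4, the one-second input is reduced to 0.4 s (6400 samples at 16 kHz), which is the kind of brief cue that can be prepended to a prosodically rendered table cell.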