Two investigations were carried out into the identification of concurrently presented structured sounds, called earcons. The first experiment investigated how varying the number of concurrently presented earcons affected their identification: the number presented had a significant effect, with fewer concurrent earcons leading to a higher proportion of earcons successfully identified. The second experiment investigated how modifying the earcons and their presentation, using techniques drawn from auditory scene analysis, affected identification. Both presenting each earcon with a unique timbre and introducing a 300 ms onset-to-onset delay between earcons significantly increased identification. Guidelines were drawn from this work to assist future interface designers incorporating concurrently presented earcons.
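The two guidelines above (a unique timbre per concurrent earcon, and a 300 ms onset-to-onset stagger) could be applied in an interface as in the following sketch. The names `Earcon` and `schedule_earcons` are illustrative, not from the paper; this is a minimal scheduling example, not an implementation of the study's playback system.

```python
from dataclasses import dataclass

# Onset-to-onset delay between concurrent earcons found to aid identification.
ONSET_DELAY_MS = 300


@dataclass
class Earcon:
    name: str
    timbre: str  # each concurrently presented earcon gets a unique timbre


def schedule_earcons(names, timbres):
    """Pair each earcon with a unique timbre and a staggered onset time (ms)."""
    if len(set(timbres)) < len(names):
        raise ValueError("each concurrent earcon needs a unique timbre")
    return [
        (Earcon(name, timbre), i * ONSET_DELAY_MS)
        for i, (name, timbre) in enumerate(zip(names, timbres))
    ]


# Example: two notifications presented "concurrently" but with staggered onsets.
schedule = schedule_earcons(["new mail", "download done"], ["piano", "marimba"])
```

The staggered onsets mean the earcons still overlap in time, but no two begin together, which is the presentation manipulation the second experiment found to improve identification.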