Vocabulary navigation made easier
Proceedings of the 15th international conference on Intelligent user interfaces
Navigating a vocabulary of thousands of entries to select the words needed to build a message is challenging for individuals with lexical access impairments such as those caused by aphasia. Ineffective vocabulary organization and navigation undermine the usability and adoption of assistive communication tools, which then fail to support users in practical communication. We have developed a multi-modal visual vocabulary that improves navigation and word finding by modeling a speaker's "mental lexicon," the structure in which words are stored and organized for efficient access and retrieval. Because of impaired links in their mental lexicon, people with aphasia have persistent difficulty accessing and retrieving the words that express intended concepts. The Visual Vocabulary for Aphasia (ViVA) attempts to compensate for some of these missing or weakened semantic connections by organizing words in a dynamic semantic network whose links reflect word-association measures based on WordNet, human judgments of semantic similarity, and past vocabulary usage.
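The idea of a dynamic semantic network with blended link weights can be sketched in code. This is a minimal illustration, not ViVA's actual implementation: the blend weights, the scores, and the usage-count squashing function are all assumptions made for the example, standing in for the three signals the abstract names (WordNet-based association, human similarity judgments, and past usage).

```python
from collections import defaultdict

class SemanticNetwork:
    """Toy semantic network: nodes are words, and each undirected link
    carries three scores that are blended into one association weight.
    The 0.4/0.4/0.2 blend is an illustrative assumption, not ViVA's."""

    def __init__(self, w_wordnet=0.4, w_human=0.4, w_usage=0.2):
        self.w = (w_wordnet, w_human, w_usage)
        # word -> {neighbor: [wordnet_score, human_score, usage_count]}
        self.links = defaultdict(dict)

    def add_link(self, a, b, wn_score, human_score, usage_count=0):
        self.links[a][b] = [wn_score, human_score, usage_count]
        self.links[b][a] = [wn_score, human_score, usage_count]

    def record_usage(self, a, b):
        # The "dynamic" part: strengthen a link each time the user follows it.
        for x, y in ((a, b), (b, a)):
            if y in self.links[x]:
                self.links[x][y][2] += 1

    def _weight(self, scores):
        wn, human, usage = scores
        usage_norm = usage / (1 + usage)  # squash raw counts into [0, 1)
        ww, wh, wu = self.w
        return ww * wn + wh * human + wu * usage_norm

    def neighbors(self, word, k=3):
        """Top-k most strongly associated words, for vocabulary navigation."""
        ranked = sorted(self.links[word].items(),
                        key=lambda kv: self._weight(kv[1]),
                        reverse=True)
        return [w for w, _ in ranked[:k]]

net = SemanticNetwork()
net.add_link("coffee", "cup", wn_score=0.6, human_score=0.9)
net.add_link("coffee", "drink", wn_score=0.8, human_score=0.8)
net.add_link("coffee", "bean", wn_score=0.5, human_score=0.6)
net.record_usage("coffee", "cup")  # user selected "cup" after "coffee"
print(net.neighbors("coffee", k=2))  # → ['cup', 'drink']
```

Before the recorded usage, "drink" outranks "cup" (0.64 vs. 0.60 under this blend); one observed selection is enough to reorder them, which is the sense in which the network adapts to the individual user.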