Context-aware prompting to transition autonomously through vocational tasks for individuals with cognitive impairments

  • Authors:
  • Yao-Jen Chang;Wan Chih Chang;Tsen-Yung Wang

  • Affiliations:
  • Chung Yuan Christian University, Chung Li, Taiwan, ROC;Chung Yuan Christian University, Chung Li, Taiwan, ROC;National Yang Ming University, Taipei, Taiwan, ROC

  • Venue:
  • Proceedings of the 11th international ACM SIGACCESS conference on Computers and accessibility
  • Year:
  • 2009


Abstract

A challenge for individuals with cognitive impairments in the workplace is to remain engaged, recall task routines, and transition autonomously across tasks while relying on limited cognitive capacity. A novel task-prompting system is presented that aims to increase workplace and life independence for people with traumatic brain injury, cerebral palsy, intellectual disability, schizophrenia, and Down syndrome. This paper describes an approach to providing distributed cognition support for work engagement for persons with cognitive disabilities. The unique strength of the system is its ability to provide prompts unique to each user, triggered by context. Because this population is very sensitive to issues of abstraction (e.g., icons) and presents the designer with the need to tailor prompts to a 'universe of one', picture or verbal cues specific to each user and context are used. The key to the approach is to spread context awareness across the system: context is flagged by beacon sources, and the appropriate response is evoked by displaying the task-prompting cues indexed by the intersection of the specific end-user and the context ID embedded in the beacon. By separating the context trigger from the pictorial or verbal response, responses can be updated independently of the rest of the installed system, and a single beacon source can trigger multiple responses on the PDA depending on the end-user and their specific tasks. A prototype was built and tested in field experiments involving eight individuals with cognitive impairments. The experimental results show that the task load of the human-device interface is low or very low, and that the system's ability to support task engagement is high and reliable.
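The indexing scheme the abstract describes can be sketched as a lookup table keyed by the (end-user, context ID) pair, so that a single beacon can evoke different cues for different users and the cue table can be updated without touching the beacons. This is a minimal illustrative sketch; all names and data here are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of prompting indexed by (user, context):
# a beacon broadcasts only a context ID; the handheld device looks
# up the cue for the current user in that context.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Cue:
    kind: str      # "picture" or "verbal"
    payload: str   # e.g., an image file name or a spoken phrase


# Cue table indexed by the intersection of end-user ID and the
# context ID embedded in the beacon (all entries illustrative).
PROMPTS = {
    ("user_a", "ctx_sink"): Cue("picture", "wash_dishes.png"),
    ("user_b", "ctx_sink"): Cue("verbal", "Please wipe the counter."),
}


def on_beacon(user_id: str, context_id: str) -> Optional[Cue]:
    """Return the cue for this user in this context, or None if no
    task is associated with the pair."""
    return PROMPTS.get((user_id, context_id))
```

Note how the same beacon context (`"ctx_sink"`) triggers a picture cue for one user and a verbal cue for another, matching the paper's separation of context trigger from pictorial or verbal response.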