Situation-Specific Models of Color Differentiation

  • Authors: David R. Flatla; Carl Gutwin
  • Affiliations: University of Saskatchewan; University of Saskatchewan
  • Venue: ACM Transactions on Accessible Computing (TACCESS)
  • Year: 2012


Abstract

Color is commonly used to represent categories and values in computer applications, but users with Color-Vision Deficiencies (CVD) often have difficulty differentiating these colors. Recoloring tools have been developed to address the problem, but current recolorers are limited in that they work from a model of only one type of congenital CVD (i.e., dichromatism). This model does not adequately describe many other forms of CVD (e.g., more common congenital deficiencies such as anomalous trichromacy, acquired deficiencies such as cataracts or age-related yellowing of the lens, or temporary deficiencies such as wearing tinted glasses or working in bright sunlight), and so standard recolorers work poorly in many situations. In this article we describe an alternate approach that can address these limitations. The new approach, called Situation-Specific Modeling (SSM), constructs a model of a specific user’s color differentiation abilities in a specific situation, and uses that model as the basis for recoloring digital presentations. As a result, SSM can inherently handle all types of CVD, whether congenital, acquired, or environmental. In this article we describe and evaluate several models that are based on the SSM approach. Our first model of individual color differentiation (called ICD-1) works in RGB color space, and a user study showed it to be accurate and robust (for users both with and without congenital CVD). However, three aspects of ICD-1 were identified as needing improvement: the calibration step needed to build the situation-specific model was too slow for real-world use, the prediction steps used in recoloring were likewise too slow, and the results of the model’s predictions were too coarse for some uses. We therefore developed three further techniques: ICD-2 reduces the time needed to calibrate the model; ICD-3 reduces the time needed to make predictions with the model; and ICD-4 provides additional information about the degree of differentiability in a prediction.
Our final result is a model of the user’s color perception that handles any type of CVD, can be calibrated in two minutes, and can find replacement colors in near real time (~1 second for a 64-color image). The ICD models provide a tool that can greatly improve the perceptibility of digital color for many different types of CVD users, and also demonstrate situation-specific modeling as a new approach that can broaden the applicability of assistive technology.
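To make the situation-specific idea concrete, the sketch below illustrates the general shape of the approach described in the abstract: calibrate a differentiability model from a specific user's responses in a specific situation, then query that model when choosing replacement colors. The Euclidean RGB distance, the single scalar threshold, and all function names here are illustrative assumptions for exposition; they are not the paper's actual ICD-1 through ICD-4 algorithms.

```python
import math

def rgb_distance(c1, c2):
    """Euclidean distance in RGB space (the abstract notes ICD-1 works in RGB)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def calibrate_threshold(trials):
    """Derive a user- and situation-specific threshold from calibration trials.

    Each trial is ((color_a, color_b), user_could_differentiate). A real
    calibration would capture direction-dependent limits in color space,
    not a single scalar; this is a deliberately simplified stand-in.
    """
    distinguishable = [rgb_distance(a, b) for (a, b), ok in trials if ok]
    return min(distinguishable) if distinguishable else float("inf")

def differentiable(c1, c2, threshold):
    """Predict whether this user, in this situation, can tell c1 from c2."""
    return rgb_distance(c1, c2) >= threshold

# Hypothetical calibration data: the user distinguished red from green,
# but not two nearby shades of red.
trials = [
    (((255, 0, 0), (0, 255, 0)), True),
    (((200, 0, 0), (210, 0, 0)), False),
]
threshold = calibrate_threshold(trials)
```

Because the threshold comes from observed responses rather than a fixed dichromatism model, the same machinery covers congenital, acquired, and environmental deficiencies alike; only the calibration data changes.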