Improving automotive safety by pairing driver emotion and car voice emotion

  • Authors:
  • Clifford Nass, Ing-Marie Jonsson, Helen Harris, Ben Reaves, Jack Endo, Scott Brave, Leila Takayama

  • Affiliations:
  • Stanford University, Stanford, CA (Nass, Brave, Takayama); Toyota Information Technology Center, Palo Alto, CA (Jonsson, Harris, Reaves, Endo)

  • Venue:
  • CHI '05 Extended Abstracts on Human Factors in Computing Systems
  • Year:
  • 2005

Abstract

This study examines whether the characteristics of a car's voice can affect driver performance and affect. In a 2 (driver emotion: happy vs. upset) x 2 (car voice emotion: energetic vs. subdued) experimental study, participants (N=40) had emotion induced by watching one of two sets of 5-minute video clips. Participants then spent 20 minutes in a driving simulator in which a voice in the car spoke 36 questions (e.g., "How do you think that the car is performing?") and comments (e.g., "My favorite part of this drive is the lighthouse.") in either an energetic or a subdued voice. Participants were invited to interact with the car voice. When driver emotion matched car voice emotion (happy/energetic and upset/subdued), drivers had fewer accidents, attended more to the road (both actual and perceived attention), and spoke more to the car. Implications for car design and voice user interface design are discussed.