A model for synthesizing a combined verbal and nonverbal behavior based on personality traits in human-robot interaction

  • Authors:
  • Amir Aly; Adriana Tapus

  • Affiliations:
  • ENSTA-ParisTech, Palaiseau, France; ENSTA-ParisTech, Palaiseau, France

  • Venue:
  • Proceedings of the 8th ACM/IEEE international conference on Human-robot interaction
  • Year:
  • 2013

Abstract
Abstract

Robots are increasingly present in our daily lives; they have to move through human-centered environments, interact with humans, and obey social rules so as to produce an appropriate social behavior in accordance with the human's profile (i.e., personality, mood, and preferences). Recent research has discussed the effect of personality traits on verbal and nonverbal production, which plays a major role in transferring and understanding messages in a social interaction between a human and a robot. The characteristics of the gestures generated during nonverbal communication (e.g., amplitude, direction, rate, and speed) can differ according to the personality trait, which similarly influences the verbal content of human speech in terms of verbosity, repetition, etc. Our research therefore maps a human's verbal behavior to a corresponding combined verbal-nonverbal robot behavior based on the personality dimensions of the interacting human. The system first estimates the interacting human's personality traits through a psycholinguistic analysis of the spoken language, then uses the PERSONAGE natural language generator to produce verbal content corresponding to the estimated traits. Gestures are generated with the BEAT toolkit, which performs a linguistic and contextual analysis of the generated language, relying on rules derived from extensive research into human conversational behavior. We explored human-robot personality matching and compared the adapted combined robot behavior (gesture and speech) with the adapted speech-only robot behavior in an interaction. Our results validated that individuals preferred to interact with a robot that had the same personality as their own, and that an adapted combined robot behavior (gesture and speech) was more engaging and effective than a speech-only robot behavior. Our experiments were conducted with the Nao robot.
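The first stage of the pipeline described above can be sketched as a toy example. The feature set (verbosity and repetition as cues for extraversion), the scaling constants, and the matching threshold below are illustrative assumptions, not the authors' model, and the sketch does not use the actual PERSONAGE or BEAT interfaces:

```python
# Toy sketch of psycholinguistic trait estimation and personality
# matching, loosely inspired by the pipeline in the abstract.
# All features, weights, and thresholds are invented for illustration.

def extraversion_score(utterances):
    """Map simple verbal features (verbosity, repetition) to [0, 1]."""
    words = [w.lower().strip(".,!?") for u in utterances for w in u.split()]
    if not words:
        return 0.5  # neutral prior when no speech is available yet
    verbosity = len(words) / len(utterances)       # words per utterance
    repetition = 1 - len(set(words)) / len(words)  # share of repeated tokens
    # Blend the two cues into a single score in [0, 1]; the abstract notes
    # that personality influences verbosity and repetition in speech.
    return 0.5 * min(verbosity / 15.0, 1.0) + 0.5 * min(repetition * 2, 1.0)

def matched_robot_profile(score, threshold=0.5):
    """Pick the robot behavior profile matching the human's estimated trait."""
    return "extraverted" if score >= threshold else "introverted"

talkative = ["I really really love robots, robots are great great fun",
             "we should talk and talk about robots all day long"]
reserved = ["yes", "maybe"]
print(matched_robot_profile(extraversion_score(talkative)))  # extraverted
print(matched_robot_profile(extraversion_score(reserved)))   # introverted
```

In the actual system the estimated traits would parameterize PERSONAGE's language generation and, via BEAT, the accompanying gestures; this sketch only shows the matching idea the user study evaluated.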