Robust facial expression recognition of a speaker using thermal image processing and updating of fundamental training data

  • Authors:
  • Yuu Nakanishi, Yasunari Yoshitomi, Taro Asada, Masayoshi Tabuse

  • Affiliations:
  • Graduate School of Life and Environmental Sciences, Kyoto Prefectural University, Kyoto, Japan 606-8522 (all authors)

  • Venue:
  • Artificial Life and Robotics
  • Year:
  • 2013

Abstract

We previously developed a method for recognizing the facial expressions of a speaker. For recognition, three static images are selected at the timing positions just before speaking and while speaking the phonemes of the first and last vowels, and only static images of the front-view face are used. However, frequent updates of the training data were time-consuming. To reduce the update time, we found that classifying expressions into "neutral", "happy", and "others" was both efficient and accurate. Using the proposed method with the training data for "happy" and "neutral" updated after an interval of approximately three and a half years, the facial expressions of two subjects were discriminated with 87.0 % accuracy into "happy", "neutral", and "others" while the subjects exhibited the intentional facial expressions of "angry", "happy", "neutral", "sad", and "surprised".
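The abstract outlines a three-frame, three-class decision pipeline. The sketch below illustrates that flow under stated assumptions: the `Utterance` container, the phoneme-timed frame indices, the histogram feature extractor, and the k-NN classifier are all hypothetical stand-ins, not the authors' thermal-image features or classifier; it only mirrors the described steps of selecting frames just before speaking and at the first and last vowels, then voting among "neutral", "happy", and "others".

```python
# Minimal sketch of the three-class decision described in the abstract.
# All names (Utterance, frame_features, the k-NN model) are illustrative
# assumptions; the paper's actual thermal features are not reproduced here.
from dataclasses import dataclass
from typing import List

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

LABELS = ["neutral", "happy", "others"]


@dataclass
class Utterance:
    frames: np.ndarray       # front-view thermal frames, shape (T, H, W)
    first_vowel_idx: int     # frame index aligned with the first vowel phoneme
    last_vowel_idx: int      # frame index aligned with the last vowel phoneme


def select_key_frames(u: Utterance) -> List[np.ndarray]:
    """Pick the three static images used for recognition:
    just before speaking, first vowel, and last vowel."""
    return [u.frames[0], u.frames[u.first_vowel_idx], u.frames[u.last_vowel_idx]]


def frame_features(frame: np.ndarray) -> np.ndarray:
    """Placeholder feature extractor: a normalized thermal-intensity histogram."""
    hist, _ = np.histogram(frame, bins=32,
                           range=(float(frame.min()), float(frame.max()) + 1e-6))
    return hist / max(hist.sum(), 1)


def classify_utterance(u: Utterance, clf: KNeighborsClassifier) -> str:
    """Classify each of the three key frames and take a majority vote."""
    feats = np.stack([frame_features(f) for f in select_key_frames(u)])
    votes = list(clf.predict(feats))
    return max(LABELS, key=votes.count)


def retrain(clf: KNeighborsClassifier,
            features: np.ndarray, labels: List[str]) -> KNeighborsClassifier:
    """Refit the classifier when the "happy"/"neutral" training data
    are updated after a long interval, as the abstract describes."""
    clf.fit(features, labels)
    return clf
```

In this reading, restricting the updated classes to "happy" and "neutral" keeps the periodic retraining step small, while everything that falls outside those two classes is absorbed by the catch-all "others" label.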