Human augmented cognition based on integration of visual and auditory information

  • Authors:
  • Woong Jae Won;Wono Lee;Sang-Woo Ban;Minook Kim;Hyung-Min Park;Minho Lee

  • Affiliations:
  • School of Electrical Engineering and Computer Science, Kyungpook National University, Taegu, Korea;School of Electrical Engineering and Computer Science, Kyungpook National University, Taegu, Korea;Department of Information & Communication Engineering, Dongguk University, Gyeongbuk, Korea;Department of Electronic Engineering, Sogang University, Seoul, Korea;Department of Electronic Engineering, Sogang University, Seoul, Korea;School of Electrical Engineering and Computer Science, Kyungpook National University, Taegu, Korea

  • Venue:
  • PRICAI'10 Proceedings of the 11th Pacific Rim international conference on Trends in artificial intelligence
  • Year:
  • 2010


Abstract

In this paper, we propose a new multi-sensory fusion model for human identification that supports human augmented cognition. In the proposed model, facial features and mel-frequency cepstral coefficients (MFCCs) serve as the visual and auditory features, respectively, for identifying a person, and an AdaBoost model performs the identification from the integrated visual and auditory features. Facial form features are obtained by principal component analysis (PCA) of the face area, which is localized by an AdaBoost algorithm combined with a skin-color preferable attention model; the MFCCs are extracted from the person's speech. The proposed multi-sensory integration model thus aims to enhance identification performance by letting the visual and auditory modalities work complementarily under partly distorted sensory conditions. A human augmented cognition system incorporating the proposed identification model is implemented as a goggle-type device that presents information, such as an unknown person's profile, based on the identification result. Experimental results show that the proposed model plausibly performs human identification in an indoor meeting situation.
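The pipeline the abstract describes can be sketched roughly as follows: project face vectors onto principal components, concatenate them with per-utterance MFCCs, and train a boosted classifier on the fused vector. The sketch below is illustrative only, assuming synthetic stand-ins for the face and MFCC features and a minimal decision-stump AdaBoost; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_features(X, n_components):
    """Project rows of X onto the top principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Toy data standing in for two people: each sample has a 64-dim "face"
# vector and a 13-dim "MFCC" vector (both synthetic, class means shifted).
labels = np.repeat([0, 1], 20)
faces = rng.normal(labels[:, None] * 0.8, 1.0, size=(40, 64))
mfccs = rng.normal(labels[:, None] * 0.8, 1.0, size=(40, 13))

# Fused feature vector: PCA of the face area concatenated with the MFCCs.
fused = np.hstack([pca_features(faces, 8), mfccs])

def stump_predict(X, j, t, s):
    """Threshold stump on feature j: sign(s * (x_j - t)), zeros mapped to s."""
    pred = s * np.sign(X[:, j] - t)
    pred[pred == 0] = s
    return pred

def adaboost_train(X, y, rounds=20):
    """Minimal AdaBoost over one-feature threshold stumps (y in {0, 1})."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    yy = 2 * y - 1  # map labels to {-1, +1}
    learners = []
    for _ in range(rounds):
        best = None
        for j in range(X.shape[1]):            # exhaustive stump search
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    err = w[stump_predict(X, j, t, s) != yy].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # guard log of 0
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * yy * stump_predict(X, j, t, s))
        w /= w.sum()
        learners.append((alpha, j, t, s))
    return learners

def adaboost_predict(learners, X):
    score = np.zeros(len(X))
    for alpha, j, t, s in learners:
        score += alpha * stump_predict(X, j, t, s)
    return (score > 0).astype(int)

model = adaboost_train(fused, labels)
acc = (adaboost_predict(model, fused) == labels).mean()
print(f"training accuracy on toy fused features: {acc:.2f}")
```

The point of the fusion is that when one modality is degraded (e.g. a partly occluded face), the concatenated vector still carries discriminative information from the other modality, which the boosting stage can exploit.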