Type-2 fuzzy Gaussian mixture models

  • Authors:
  • Jia Zeng; Lei Xie; Zhi-Qiang Liu

  • Affiliations:
  • Department of Electronic Engineering, City University of Hong Kong, Hong Kong; School of Computer Science, Northwestern Polytechnical University, Xi'an, PR China; School of Creative Media, City University of Hong Kong, Hong Kong

  • Venue:
  • Pattern Recognition
  • Year:
  • 2008


Abstract

This paper presents a new extension of Gaussian mixture models (GMMs) based on type-2 fuzzy sets (T2 FSs), referred to as T2 FGMMs. The estimated parameters of a GMM may not accurately reflect the underlying distributions of the observations because of insufficient and noisy data in real-world problems. Through the three-dimensional membership functions of T2 FSs, T2 FGMMs use the footprint of uncertainty (FOU) together with interval secondary membership functions to handle an uncertain mean vector or an uncertain covariance matrix, so that the GMM parameters may vary anywhere within an interval with uniform possibility. As a result, the likelihood of the T2 FGMM becomes an interval rather than a precise real number, which accounts for the uncertainty of the GMM. These interval likelihoods are then processed by a generalized linear model (GLM) for classification decision-making. In this paper we focus on the role of the FOU in pattern classification. Multi-category classification on different data sets from the UCI repository shows that T2 FGMMs are consistently as good as or better than GMMs when training data are insufficient, and that they are insensitive to different areas of the FOU. Based on T2 FGMMs, we extend hidden Markov models (HMMs) to type-2 fuzzy HMMs (T2 FHMMs). Phoneme classification in babble noise shows that T2 FHMMs outperform classical HMMs in robustness and classification rate. We also find that T2 FHMMs with uncertain mean vectors and a larger FOU area classify better when the signal-to-noise ratio is lower.
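
To illustrate the central idea of interval likelihoods, the following is a minimal sketch (not the paper's exact formulation) of the upper and lower likelihood bounds for a one-dimensional Gaussian component whose mean is only known to lie in an interval [m_lo, m_hi], i.e. the FOU with an uncertain mean. The function name, the use of an unnormalized Gaussian primary membership, and the example values are illustrative assumptions.

```python
import numpy as np

def interval_likelihood_uncertain_mean(x, m_lo, m_hi, sigma):
    """Upper/lower likelihood bounds of a 1-D Gaussian whose mean is only
    known to lie in [m_lo, m_hi] (the footprint of uncertainty).

    Illustrative sketch under the assumption of an unnormalized Gaussian
    primary membership function; not the paper's exact formulation.
    """
    def g(x, m):
        # Unnormalized Gaussian primary membership with fixed sigma.
        return np.exp(-0.5 * ((x - m) / sigma) ** 2)

    # Upper bound: the admissible mean closest to x maximizes the membership.
    if x < m_lo:
        upper = g(x, m_lo)
    elif x > m_hi:
        upper = g(x, m_hi)
    else:
        upper = 1.0  # x lies inside the interval of possible means

    # Lower bound: the admissible mean farthest from x minimizes the membership.
    lower = g(x, m_hi) if x <= 0.5 * (m_lo + m_hi) else g(x, m_lo)
    return lower, upper


# Example: one observation scored against one component with an uncertain mean.
lo, hi = interval_likelihood_uncertain_mean(x=1.2, m_lo=0.8, m_hi=1.0, sigma=0.5)
print(f"likelihood interval: [{lo:.3f}, {hi:.3f}]")
```

In a full T2 FGMM these per-component bounds would be combined across mixture components and feature dimensions, yielding an interval likelihood per class that a downstream classifier such as the GLM can turn into a crisp decision.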