Hybrid fuzzy HMM system for Arabic connectionist speech recognition

  • Authors:
  • Sinout D. Shenouda; Fayez W. Zaki; Amr Goneid

  • Affiliations:
  • Computer Science Department, American University in Cairo, Cairo, Egypt; Electronics and Communications Engineering, Faculty of Engineering, Mansoura University, Egypt; Computer Science Department, American University in Cairo, Cairo, Egypt

  • Venue:
  • ISPRA'06 Proceedings of the 5th WSEAS International Conference on Signal Processing, Robotics and Automation
  • Year:
  • 2006

Abstract

In this paper, a new Arabic connectionist speech recognition system is presented. The system combines fuzzy integral and fuzzy measure theory [1] with the Hidden Markov Model (HMM) [2] using the CSLU toolkit. The CSLU toolkit [3] is a research and development software environment that provides a powerful and flexible platform for research in spoken language understanding. The objective of this paper is to design a hybrid Fuzzy HMM (FHMM) system for Arabic speech recognition, based on a Hidden Markov Model extended with fuzzy logic and fuzzy integral theory. In this framework, the fuzzy integral is used to relax the independence assumptions that probability functions require. Notably, one particular choice of fuzzy integral (the Choquet integral), fuzzy measure (a probability measure), and fuzzy intersection operator (multiplication) reduces the generalized fuzzy HMM to the classical HMM. Both the traditional HMM and the proposed Fuzzy HMM were implemented by computer simulation and their performance compared. The Fuzzy HMM (FHMM) system shows improved recognition accuracy over the classical HMM system: FHMM accuracy varies from 93.36% to 98.36% depending on the data set used, whereas classical HMM accuracy varies from 91.27% to 94.60% on the same data sets.
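
To make the stated reduction concrete, the short Python sketch below (not taken from the paper; the function name choquet_integral and the state labels s1-s3 are illustrative assumptions) computes the discrete Choquet integral with respect to a fuzzy measure. When the measure is an ordinary additive probability measure, the weights collapse to the point probabilities and the integral equals the expectation that a classical HMM aggregation step would use, which is the special case noted in the abstract.

    # Minimal sketch of the discrete Choquet integral of f with respect to a
    # fuzzy measure g, and its reduction to an expectation when g is additive.
    def choquet_integral(values, measure):
        """values: dict {element: f(x)}; measure: callable on a frozenset of elements."""
        # Order elements by decreasing value: f(x_(1)) >= f(x_(2)) >= ...
        ordered = sorted(values, key=values.get, reverse=True)
        total, prev_g = 0.0, 0.0
        subset = set()
        for x in ordered:
            subset.add(x)
            g = measure(frozenset(subset))      # g(A_i), A_i = {x_(1), ..., x_(i)}
            total += values[x] * (g - prev_g)   # f(x_(i)) * [g(A_i) - g(A_(i-1))]
            prev_g = g
        return total

    # Hypothetical three-state example: p is a probability (additive) measure,
    # f plays the role of per-state scores to be aggregated.
    p = {"s1": 0.2, "s2": 0.5, "s3": 0.3}
    f = {"s1": 0.9, "s2": 0.4, "s3": 0.7}

    prob_measure = lambda A: sum(p[x] for x in A)   # additive measure on subsets
    print(choquet_integral(f, prob_measure))        # 0.59
    print(sum(f[x] * p[x] for x in f))              # 0.59, the ordinary expectation

For a genuinely non-additive fuzzy measure the two quantities differ, which is exactly the extra modelling freedom the FHMM exploits to relax the HMM independence assumptions.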