Speedup of Implementing Fuzzy Neural Networks With High-Dimensional Inputs Through Parallel Processing on Graphic Processing Units

  • Authors:
  • Chia-Feng Juang; Teng-Chang Chen; Wei-Yuan Cheng

  • Affiliations:
  • Dept. of Electr. Eng., Nat. Chung-Hsing Univ., Taichung, Taiwan

  • Venue:
  • IEEE Transactions on Fuzzy Systems
  • Year:
  • 2011

Abstract

This paper proposes the implementation of a zero-order Takagi-Sugeno-Kang (TSK)-type fuzzy neural network (FNN) on graphic processing units (GPUs) to reduce training time. The software platform used in this study is the compute unified device architecture (CUDA). The implemented FNN adopts the structure and parameter learning of a self-constructing neural fuzzy inference network because of its strong learning performance. FNN training is conventionally implemented on a single-threaded CPU, where each input variable and fuzzy rule is processed serially. This type of training is time consuming, especially for a high-dimensional FNN that consists of a large number of rules. The GPU is capable of running a large number of threads in parallel. In the GPU-implemented FNN (GPU-FNN), blocks of threads are partitioned according to the parallel and independent properties of fuzzy rules, and large sets of input data are mapped to parallel threads in each block. For memory management, the data in the GPU-FNN are divided into smaller chunks according to the fuzzy rule structure so that on-chip memory can be shared among multiple thread processors. The GPU-FNN is applied to different problems to verify its efficiency. The results show that training the FNN with the GPU implementation achieves a speedup of more than 30 times over the CPU implementation for problems with high-dimensional attributes.
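
The GPU mapping the abstract describes (one thread block per fuzzy rule, with input data staged in on-chip shared memory) can be illustrated with a short CUDA sketch. In a zero-order TSK network, the output is the firing-strength-weighted average of constant rule consequents, y = (Σ_i μ_i(x) a_i) / (Σ_i μ_i(x)), where each firing strength μ_i(x) is typically a product of Gaussian memberships over the input dimensions. The kernel below is a minimal, hypothetical sketch of the firing-strength stage only; the kernel name, parameter names, sizes, and the Gaussian membership form are illustrative assumptions, not the authors' released code.

```cuda
// Sketch (not the authors' code) of the block-per-rule CUDA partitioning:
// one thread block evaluates the firing strength of one zero-order TSK
// rule, with the input sample staged in on-chip shared memory.
#include <cstdio>
#include <cmath>
#include <vector>
#include <cuda_runtime.h>

constexpr int DIM     = 64;   // number of input variables (assumed)
constexpr int RULES   = 256;  // number of fuzzy rules (assumed)
constexpr int THREADS = 128;  // threads per block; power of 2 for the reduction

// centers/widths: RULES x DIM Gaussian antecedent parameters (row-major),
// x: one input sample, firing: per-rule firing strengths.
__global__ void ruleFiringKernel(const float* centers, const float* widths,
                                 const float* x, float* firing) {
    __shared__ float xs[DIM];          // input sample, shared by the block
    __shared__ float partial[THREADS]; // per-thread partial log-memberships

    const int rule = blockIdx.x;       // one block per fuzzy rule
    const int tid  = threadIdx.x;

    // Cooperatively stage the input vector in shared memory.
    for (int j = tid; j < DIM; j += blockDim.x) xs[j] = x[j];
    __syncthreads();

    // Each thread sums the Gaussian log-memberships of a strided subset of
    // dimensions: -(x_j - m_ij)^2 / sigma_ij^2. The rule's membership
    // product becomes a sum in log space.
    float acc = 0.0f;
    for (int j = tid; j < DIM; j += blockDim.x) {
        const float d = xs[j] - centers[rule * DIM + j];
        const float s = widths[rule * DIM + j];
        acc -= (d * d) / (s * s);
    }
    partial[tid] = acc;
    __syncthreads();

    // Block-wide tree reduction, then exponentiate once per rule.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride) partial[tid] += partial[tid + stride];
        __syncthreads();
    }
    if (tid == 0) firing[rule] = expf(partial[0]);
}

int main() {
    // Synthetic data: all centers equal to the input, so firing = exp(0) = 1.
    std::vector<float> hC(RULES * DIM, 0.5f), hW(RULES * DIM, 1.0f),
                       hX(DIM, 0.5f), hF(RULES);
    float *dC, *dW, *dX, *dF;
    cudaMalloc(&dC, hC.size() * sizeof(float));
    cudaMalloc(&dW, hW.size() * sizeof(float));
    cudaMalloc(&dX, hX.size() * sizeof(float));
    cudaMalloc(&dF, hF.size() * sizeof(float));
    cudaMemcpy(dC, hC.data(), hC.size() * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dW, hW.data(), hW.size() * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dX, hX.data(), hX.size() * sizeof(float), cudaMemcpyHostToDevice);

    ruleFiringKernel<<<RULES, THREADS>>>(dC, dW, dX, dF);  // one block per rule
    cudaMemcpy(hF.data(), dF, hF.size() * sizeof(float), cudaMemcpyDeviceToHost);
    printf("firing[0] = %f\n", hF[0]);

    cudaFree(dC); cudaFree(dW); cudaFree(dX); cudaFree(dF);
    return 0;
}
```

The weighted average over rules and the gradient-based parameter updates would follow as further reduction kernels; the shared-memory staging of `xs` above stands in for the paper's rule-structured chunking of data into on-chip memory.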