Accelerating FCM neural network classifier using graphics processing units with CUDA

  • Authors:
  • Lin Wang; Bo Yang; Yuehui Chen; Zhenxiang Chen; Hongwei Sun

  • Affiliations:
  • Lin Wang: Shandong Provincial Key Laboratory of Network based Intelligent Computing, University of Jinan, Jinan 250022, China
  • Bo Yang: Shandong Provincial Key Laboratory of Network based Intelligent Computing, University of Jinan, Jinan 250022, China; School of Informatics, Linyi University, Linyi 276000, China
  • Yuehui Chen: Shandong Provincial Key Laboratory of Network based Intelligent Computing, University of Jinan, Jinan 250022, China
  • Zhenxiang Chen: Shandong Provincial Key Laboratory of Network based Intelligent Computing, University of Jinan, Jinan 250022, China
  • Hongwei Sun: School of Mathematical Sciences, University of Jinan, Jinan 250022, China

  • Venue:
  • Applied Intelligence
  • Year:
  • 2014

Abstract

With advances in experimental devices and methods, scientific data can be collected more easily than ever, and many of the resulting data sets are very large. The floating centroids method (FCM) has been shown to be a high-performance neural network classifier. However, the FCM is difficult to train on a large data set, which restricts its practical application. In this study, a parallel floating centroids method (PFCM) is proposed to speed up the FCM using the Compute Unified Device Architecture (CUDA), especially for large data sets. The method performs all stages as a batch within one block; blocks and threads are responsible for evaluating classifiers and performing subtasks, respectively. Experimental results indicate that this approach improves both speed and accuracy.
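
The parallel mapping described above (one block per candidate classifier, threads handling the per-sample subtasks) is only sketched in the abstract. A minimal CUDA illustration of such a mapping might look like the kernel below; the kernel name, data layout, placeholder forward pass, and fitness measure are assumptions made for illustration, not the authors' actual PFCM implementation.

// Hypothetical sketch: one block evaluates one candidate classifier,
// threads within the block handle per-sample subtasks (forward pass + error count).
// All names and the toy decision rule are illustrative, not from the paper.
#include <cuda_runtime.h>

#define THREADS_PER_BLOCK 256   // assumed power of two for the reduction below

__global__ void evaluate_classifiers(const float *weights,  // [numClassifiers * weightLen]
                                     const float *samples,  // [numSamples * featLen]
                                     const int   *labels,   // [numSamples]
                                     float       *fitness,  // [numClassifiers]
                                     int numSamples, int featLen, int weightLen)
{
    int c = blockIdx.x;                        // block index = candidate classifier index
    const float *w = weights + c * weightLen;

    __shared__ float partial[THREADS_PER_BLOCK];
    float localErr = 0.0f;

    // Threads stride over the training samples: each sample is one "subtask".
    for (int s = threadIdx.x; s < numSamples; s += blockDim.x) {
        const float *x = samples + s * featLen;
        float out = 0.0f;                      // toy stand-in for the network forward pass
        for (int j = 0; j < featLen && j < weightLen; ++j)
            out += w[j] * x[j];
        int predicted = (out > 0.0f) ? 1 : 0;  // placeholder decision rule
        if (predicted != labels[s]) localErr += 1.0f;
    }
    partial[threadIdx.x] = localErr;
    __syncthreads();

    // Shared-memory reduction of per-thread error counts.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            partial[threadIdx.x] += partial[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        fitness[c] = 1.0f - partial[0] / numSamples;  // classification accuracy as fitness
}

Under these assumptions, a host-side launch would use one block per candidate classifier, e.g. evaluate_classifiers<<<numClassifiers, THREADS_PER_BLOCK>>>(d_weights, d_samples, d_labels, d_fitness, numSamples, featLen, weightLen);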