With advances in experimental devices and methods, scientific data can be collected more easily, and the resulting data sets are often very large. The floating centroids method (FCM) has been shown to be a high-performance neural network classifier. However, training the FCM on a large data set is slow, which restricts its practical application. In this study, a parallel floating centroids method (PFCM) is proposed to accelerate the FCM using the Compute Unified Device Architecture (CUDA), especially for large data sets. The method executes all stages of a candidate evaluation as a batch within a single block: blocks are responsible for evaluating classifiers, while threads perform the subtasks within each evaluation. Experimental results indicate that both speed and accuracy are improved by this approach.
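The decomposition described in the abstract, one block per candidate classifier and one thread per subtask, can be illustrated with a minimal CPU sketch. This is not the authors' CUDA implementation: the toy linear classifiers, the data, and all function names below are hypothetical, and a Python thread pool stands in for GPU blocks, with per-sample scoring standing in for per-thread subtasks.

```python
# Hedged sketch of the PFCM work decomposition (hypothetical, not the paper's code):
# each pool task plays the role of a CUDA block evaluating one candidate classifier;
# each per-sample score plays the role of one thread's subtask inside that block.
from concurrent.futures import ThreadPoolExecutor

def score_sample(weights, sample):
    """Subtask: classify one sample with a toy linear threshold rule."""
    x, label = sample
    pred = 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0
    return int(pred == label)

def evaluate_classifier(weights, samples):
    """'Block'-level task: accuracy of one candidate over all samples."""
    return sum(score_sample(weights, s) for s in samples) / len(samples)

def parallel_evaluate(population, samples, workers=4):
    """Evaluate every candidate concurrently, one task per candidate."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda w: evaluate_classifier(w, samples),
                             population))

# Toy data: label is 1 exactly when the first feature is positive.
samples = [((1.0, 0.5), 1), ((-1.0, 0.2), 0),
           ((2.0, -0.3), 1), ((-0.5, -0.1), 0)]
population = [(1.0, 0.0), (-1.0, 0.0)]
scores = parallel_evaluate(population, samples)
# → [1.0, 0.0]: the first candidate fits the toy data, the second does not
```

On a GPU the outer map would instead launch a grid with one block per candidate, so that all candidates in a generation are scored in a single kernel launch rather than one at a time.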