Speeding up K-Means Algorithm by GPUs

  • Authors:
  • You Li; Kaiyong Zhao; Xiaowen Chu; Jiming Liu

  • Venue:
  • CIT '10 Proceedings of the 2010 10th IEEE International Conference on Computer and Information Technology
  • Year:
  • 2010

Abstract

Cluster analysis plays a critical role in a wide variety of applications, but it now faces a computational challenge due to continuously increasing data volumes. Parallel computing is one of the most promising ways to overcome this challenge. In this paper, we parallelize k-Means, one of the most popular clustering algorithms, using widely available Graphics Processing Units (GPUs). Unlike existing GPU-based k-Means algorithms, we observe that data dimensionality is an important factor to take into consideration when parallelizing k-Means on GPUs. In particular, we use two different strategies for low-dimensional and high-dimensional data sets, respectively, in order to make the best use of GPU computing power. For low-dimensional data sets, we exploit GPU on-chip registers to significantly decrease data access latency. For high-dimensional data sets, we design a novel algorithm that simulates matrix multiplication and exploits both GPU on-chip registers and on-chip shared memory to achieve a high compute-to-memory-access ratio. As a result, our GPU-based k-Means algorithm is three to eight times faster than the best reported GPU-based algorithm.
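
To illustrate the low-dimensional strategy described in the abstract, below is a minimal CUDA sketch of the cluster-assignment step of k-Means, in which each thread keeps its data point in on-chip registers and scans centroids staged in shared memory. This is an assumed illustration, not the authors' actual implementation; the kernel name, the compile-time constants DIM and MAX_K, and the data layout are all hypothetical choices made for this example.

```cuda
// Hypothetical low-dimensional k-Means assignment kernel.
// Assumptions (not from the paper): row-major float data,
// DIM small enough for a point to live in registers,
// k <= MAX_K so all centroids fit in shared memory.

#include <cuda_runtime.h>
#include <cfloat>

#define DIM   4    // assumed low dimensionality
#define MAX_K 64   // assumed upper bound on cluster count

__global__ void assign_labels(const float *points,     // n x DIM
                              const float *centroids,  // k x DIM
                              int *labels, int n, int k)
{
    __shared__ float s_centroids[MAX_K * DIM];

    // Cooperatively stage the centroids into shared memory.
    for (int i = threadIdx.x; i < k * DIM; i += blockDim.x)
        s_centroids[i] = centroids[i];
    __syncthreads();

    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= n) return;

    // Keep this thread's point in registers for the whole scan,
    // avoiding repeated global-memory reads.
    float p[DIM];
    #pragma unroll
    for (int d = 0; d < DIM; ++d)
        p[d] = points[idx * DIM + d];

    // Find the nearest centroid by squared Euclidean distance.
    float best_dist = FLT_MAX;
    int best_c = 0;
    for (int c = 0; c < k; ++c) {
        float dist = 0.0f;
        #pragma unroll
        for (int d = 0; d < DIM; ++d) {
            float diff = p[d] - s_centroids[c * DIM + d];
            dist += diff * diff;
        }
        if (dist < best_dist) { best_dist = dist; best_c = c; }
    }
    labels[idx] = best_c;
}
```

For the high-dimensional case the abstract instead describes reformulating the distance computation so that it resembles matrix multiplication, which raises the compute-to-memory-access ratio by reusing tiles of points and centroids held in registers and shared memory; the sketch above covers only the register-based low-dimensional path.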