GPU-accelerated restricted Boltzmann machine for collaborative filtering

  • Authors:
  • Xianggao Cai;Zhanpeng Xu;Guoming Lai;Chengwei Wu;Xiaola Lin

  • Affiliations:
  • School of Information Science and Technology, Sun Yat-sen University, Guangzhou, China (Xianggao Cai, Zhanpeng Xu, Chengwei Wu, Xiaola Lin); Department of Computer Application and Technology, Hanshan Normal University, Chaozhou, China (Guoming Lai)

  • Venue:
  • ICA3PP'12 Proceedings of the 12th international conference on Algorithms and Architectures for Parallel Processing - Volume Part I
  • Year:
  • 2012

Abstract

Collaborative Filtering (CF) is an important technique for recommendation systems, which model and analyze the preferences of customers in order to give reasonable recommendations. Recently, many applications based on the Restricted Boltzmann Machine (RBM) have been developed for a large variety of learning problems. The RBM-based model for Collaborative Filtering (RBM-CF) is able to deal with large-scale data sets and achieves good recommendation performance. However, the computation of RBM becomes problematic when a large number of hidden features is used to improve recommendation accuracy. Although RBM has great potential for parallelism, developing a parallel implementation of RBM-CF on GPU is still a challenge, since the data sets for CF are typically large and sparse. In this paper, we propose a parallel implementation of RBM-CF on GPU using CUDA. We first present how to transform the computation of RBM-CF into matrix-based operations on GPU, and then propose three CUDA kernels for sparse matrix-matrix multiplication to further improve the computational efficiency of RBM-CF when modeling large-scale and sparse data sets. Experimental results show that our parallel implementation achieves significant speedups on GPU.
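
To illustrate the kind of operation the abstract refers to, below is a minimal CUDA sketch of a sparse-dense matrix-matrix product (CSR-format A times dense, row-major B), which is the general class of computation that arises when the RBM-CF updates are expressed as matrix operations over a sparse rating matrix. This is a hypothetical illustration with a naive one-thread-per-output-element kernel, not the authors' three optimized kernels; the kernel name, layout choices, and toy data are assumptions for the example.

    // Sketch (assumed, not the paper's kernels): C (m x n, dense) = A (m x k, CSR) * B (k x n, dense, row-major).
    // Each thread computes one element of C by walking the nonzeros of one row of A.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void csr_spmm(int m, int n,
                             const int *rowPtr, const int *colIdx, const float *val,
                             const float *B, float *C)
    {
        int row = blockIdx.y * blockDim.y + threadIdx.y;  // row of C
        int col = blockIdx.x * blockDim.x + threadIdx.x;  // column of C
        if (row >= m || col >= n) return;

        float acc = 0.0f;
        for (int p = rowPtr[row]; p < rowPtr[row + 1]; ++p)   // nonzeros of A's row
            acc += val[p] * B[colIdx[p] * n + col];
        C[row * n + col] = acc;
    }

    int main()
    {
        // Toy example: A = [[1, 0, 2], [0, 3, 0]] in CSR, B = [[1, 2], [3, 4], [5, 6]].
        const int m = 2, n = 2;
        int   hRowPtr[] = {0, 2, 3};
        int   hColIdx[] = {0, 2, 1};
        float hVal[]    = {1.f, 2.f, 3.f};
        float hB[]      = {1.f, 2.f,  3.f, 4.f,  5.f, 6.f};
        float hC[m * n];

        int *dRowPtr, *dColIdx; float *dVal, *dB, *dC;
        cudaMalloc(&dRowPtr, sizeof(hRowPtr)); cudaMalloc(&dColIdx, sizeof(hColIdx));
        cudaMalloc(&dVal, sizeof(hVal)); cudaMalloc(&dB, sizeof(hB)); cudaMalloc(&dC, sizeof(hC));
        cudaMemcpy(dRowPtr, hRowPtr, sizeof(hRowPtr), cudaMemcpyHostToDevice);
        cudaMemcpy(dColIdx, hColIdx, sizeof(hColIdx), cudaMemcpyHostToDevice);
        cudaMemcpy(dVal, hVal, sizeof(hVal), cudaMemcpyHostToDevice);
        cudaMemcpy(dB, hB, sizeof(hB), cudaMemcpyHostToDevice);

        dim3 block(16, 16);
        dim3 grid((n + block.x - 1) / block.x, (m + block.y - 1) / block.y);
        csr_spmm<<<grid, block>>>(m, n, dRowPtr, dColIdx, dVal, dB, dC);
        cudaMemcpy(hC, dC, sizeof(hC), cudaMemcpyDeviceToHost);

        for (int i = 0; i < m; ++i)
            printf("%.1f %.1f\n", hC[i * n], hC[i * n + 1]);  // expected: 11.0 14.0 / 9.0 12.0
        return 0;
    }

The naive mapping above reads B without regard to the irregular column pattern of A; the paper's contribution, per the abstract, lies in kernels specialized to the large, sparse rating matrices of CF, which this sketch does not attempt to reproduce.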