Poster: fast GPU read alignment with Burrows-Wheeler transform based index

  • Authors:
  • Aleksandr Drozd, Naoya Maruyama, Satoshi Matsuoka

  • Affiliations:
  • Tokyo Institute of Technology, Tokyo, Japan

  • Venue:
  • Proceedings of the 2011 Companion on High Performance Computing, Networking, Storage and Analysis
  • Year:
  • 2011


Abstract

We address the problem of performing faster read alignment on GPU devices. DNA sequence processing is extremely computationally intensive, as constant progress in sequencing technology leads to ever-increasing amounts of sequence data [6]. One possible solution is to use the massively parallel capabilities of modern GPU devices [5]. However, the performance characteristics and programming models of GPUs differ from those of traditional architectures and require new approaches. Most importantly, host memory and I/O systems are not directly accessible from a GPU device, and GPU memory is usually an order of magnitude smaller than host memory. Given the size of read alignment data, this memory limit becomes a real problem: when the reference sequence index does not fit into memory, it has to be split into chunks that are processed individually. In most cases the complexity of the algorithm does not depend on the index size, so such index splitting increases computation time tremendously.

Analysis of existing solutions for read alignment on GPUs showed that the memory limit is the chief performance issue. One attempt to reduce memory consumption replaced the commonly used suffix tree, which allows for better theoretical performance of the algorithm [4], with a suffix array, which is less efficient in terms of pure computational complexity but more compact. By doing this, the authors of MummerGPU++ achieved several times better performance [3]. We suggest using the Burrows-Wheeler Transform [1] for both the index and the corresponding search algorithm to achieve a much smaller memory footprint. This transform is used mainly in compression algorithms such as bzip2, as it replaces recurring patterns in the string with continuous runs of a single symbol, but it can also be used for pattern matching [2]. At the same time, we continue using the more traditional suffix array on the host side to benefit from the computational characteristics of both GPU and CPU.
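To illustrate the technique named above (not the poster's actual GPU implementation), the following sketch shows BWT-based exact pattern matching via FM-index backward search: the index consists only of the transformed string plus rank tables, and each search step takes time independent of the reference length.

```python
# Illustrative CPU sketch of BWT/FM-index backward search.
# All names here are our own; the poster's GPU kernels are not shown.

def bwt(text):
    """Burrows-Wheeler Transform of text (must end with sentinel '$')."""
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(r[-1] for r in rotations)

def fm_index(bwt_str):
    """Build the C table and occurrence counts used by backward search."""
    chars = sorted(set(bwt_str))
    # C[c] = number of characters in bwt_str strictly smaller than c
    C, total = {}, 0
    for c in chars:
        C[c] = total
        total += bwt_str.count(c)
    # occ[c][i] = occurrences of c in bwt_str[:i]
    occ = {c: [0] * (len(bwt_str) + 1) for c in chars}
    for i, ch in enumerate(bwt_str):
        for c in chars:
            occ[c][i + 1] = occ[c][i] + (ch == c)
    return C, occ

def backward_search(pattern, C, occ, n):
    """Count occurrences of pattern in O(len(pattern)) rank queries."""
    lo, hi = 0, n
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + occ[c][lo]
        hi = C[c] + occ[c][hi]
        if lo >= hi:
            return 0
    return hi - lo

reference = "ACGTACGT$"
b = bwt(reference)
C, occ = fm_index(b)
print(backward_search("ACG", C, occ, len(b)))  # → 2
```

In practice the occurrence table is sampled rather than stored densely, which is what makes the BWT index an order of magnitude smaller than a suffix array over the same reference.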
We reduced the index size 12-fold and, by doing this alone, achieved a 3-4x performance improvement over the suffix-array based MummerGPU++. Since even with this compressed index the workload can exceed available device memory, we developed a performance model to analyze how overall execution time is affected by the proportions and order in which memory is allocated for chunks of the index and the query set. This model allowed us to find the best balance of memory allocation and to double performance compared to the naive approach of allocating equal shares of memory to the index and the queries. The model is then applied to show that using multiple GPUs is not only a way to speed up the application, but also a way to overcome some single-GPU performance issues and achieve super-linear scaling, at least up to the number of GPUs typically available on one host.
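The trade-off behind such a model can be sketched in a few lines. The cost function, bandwidth, and sizes below are our own illustrative assumptions, not the poster's actual model: searching a query against an index chunk takes time independent of chunk size, so every query is re-processed once per chunk, and the loop order decides which side gets re-transferred to the device.

```python
# Hypothetical cost sketch of splitting device memory between index
# chunks and query batches (assumed parameters, index in the outer loop).
import math

def total_time(mem_gb, index_gb, query_gb, index_share,
               bw_gbps=8.0, search_s_per_gb_queries=1.0):
    """Estimated seconds to align all queries against the whole index."""
    index_mem = mem_gb * index_share
    query_mem = mem_gb - index_mem
    if index_mem <= 0 or query_mem <= 0:
        return math.inf
    chunks = math.ceil(index_gb / index_mem)
    # The index crosses the bus once; every query crosses once per chunk.
    transfer = (index_gb + chunks * query_gb) / bw_gbps
    # Every query is searched against every index chunk.
    search = chunks * query_gb * search_s_per_gb_queries
    return transfer + search

# Example: 6 GB device memory, 24 GB index, 12 GB of queries.
naive = total_time(6, 24, 12, 0.5)
tuned = min(total_time(6, 24, 12, s / 100) for s in range(1, 100))
print(naive, tuned)  # the tuned split is well under the equal split
```

Under these assumptions the cost depends only on the number of index chunks, so the best split gives the index as much memory as possible while leaving room for a query batch; the equal split is noticeably slower, in the spirit of the doubling reported above.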