Improving index performance through prefetching

  • Authors:
  • Shimin Chen; Phillip B. Gibbons; Todd C. Mowry

  • Affiliations:
  • School of Computer Science, Carnegie Mellon University, Pittsburgh, PA; Information Sciences Research Center, Bell Laboratories, Murray Hill, NJ; School of Computer Science, Carnegie Mellon University, Pittsburgh, PA

  • Venue:
  • SIGMOD '01 Proceedings of the 2001 ACM SIGMOD international conference on Management of data
  • Year:
  • 2001

Abstract

This paper proposes and evaluates Prefetching B+-Trees (pB+-Trees), which use prefetching to accelerate two important operations on B+-Tree indices: searches and range scans. To accelerate searches, pB+-Trees use prefetching to effectively create nodes wider than the natural data transfer size: e.g., eight cache lines or disk pages rather than one. These wider nodes reduce the height of the B+-Tree, thereby decreasing the number of expensive misses when going from parent to child, without significantly increasing the cost of fetching a given node. Our results show that this technique speeds up search and update times by a factor of 1.21-1.5 for main-memory B+-Trees. In addition, it outperforms and is complementary to “Cache-Sensitive B+-Trees.” To accelerate range scans, pB+-Trees provide arrays of pointers to their leaf nodes. These arrays allow the pB+-Tree to prefetch arbitrarily far ahead, even for nonclustered indices, thereby hiding the normally expensive cache misses associated with traversing the leaves within the range. Our results show that this technique yields over a sixfold speedup on range scans of 1000+ keys. Although our experimental evaluation focuses on main memory databases, the techniques that we propose are also applicable to hiding disk latency.
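
The wide-node search idea described in the abstract can be illustrated with a short C sketch. This is only a minimal illustration under stated assumptions, not the authors' implementation: it assumes GCC/Clang's __builtin_prefetch, a 64-byte cache line, a hypothetical eight-line node layout, and an invented fanout and structure chosen for exposition.

    #include <stddef.h>

    #define NODE_LINES 8    /* assumed node width: 8 cache lines instead of 1 */
    #define LINE_BYTES 64   /* assumed cache-line size */
    #define FANOUT     30   /* illustrative fanout that fits in ~8 lines */

    typedef struct node {
        int          nkeys;
        long         keys[FANOUT];
        struct node *children[FANOUT + 1];  /* NULL in leaf nodes (sketch convention) */
    } node_t;

    /* Issue prefetches for every cache line of a wide node before touching its
     * keys, so the misses for all NODE_LINES lines overlap instead of occurring
     * one node at a time down the tree. */
    static void prefetch_node(const node_t *n)
    {
        const char *p = (const char *)n;
        for (int i = 0; i < NODE_LINES; i++)
            __builtin_prefetch(p + i * LINE_BYTES, 0 /* read */, 3 /* keep in cache */);
    }

    /* Search the index, prefetching each node as soon as its address is known. */
    const node_t *search(const node_t *root, long key)
    {
        const node_t *n = root;
        while (n != NULL) {
            prefetch_node(n);
            int i = 0;
            while (i < n->nkeys && key >= n->keys[i])
                i++;
            if (n->children[0] == NULL)     /* leaf reached */
                return n;
            n = n->children[i];
        }
        return NULL;
    }

The point of the prefetch loop is that fetching all eight lines of a wide node costs little more than fetching a single line, since the misses overlap, while the wider fanout reduces tree height and thus the number of parent-to-child misses on a search path.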