OpenRuleBench: an analysis of the performance of rule engines

  • Authors:
  • Senlin Liang; Paul Fodor; Hui Wan; Michael Kifer

  • Affiliations:
  • State University of New York at Stony Brook, Stony Brook, NY, USA (all authors)

  • Venue:
  • Proceedings of the 18th International Conference on World Wide Web
  • Year:
  • 2009


Abstract

The Semantic Web initiative has led to an upsurge of interest in rules as a general and powerful way of processing, combining, and analyzing semantic information. Since several of the technologies underlying rule-based systems are already quite mature, it is important to understand how such systems might perform at Web scale. OpenRuleBench is a suite of benchmarks for analyzing the performance and scalability of different rule engines. The study currently spans five different technologies and eleven systems, but OpenRuleBench is an open community resource, and contributions from the community are welcome. In this paper, we describe the tested systems and technologies and the testing methodology, and analyze the results.
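To give a flavor of the workloads such benchmarks exercise, recursive rules like transitive closure are a classic stress test for rule engines. The sketch below is illustrative only (not taken from OpenRuleBench itself): a minimal semi-naive bottom-up evaluation in Python of the Datalog program `tc(X,Y) :- edge(X,Y).` and `tc(X,Y) :- tc(X,Z), edge(Z,Y).`, using a hypothetical toy `edges` relation.

```python
# Semi-naive evaluation of transitive closure:
#   tc(X, Y) :- edge(X, Y).
#   tc(X, Y) :- tc(X, Z), edge(Z, Y).
# Only facts derived in the previous round ("delta") participate in
# the recursive join, which is what makes the evaluation semi-naive.

def transitive_closure(edges):
    """Compute the transitive closure of a binary relation (a set of pairs)."""
    tc = set(edges)      # base rule: every edge fact is a tc fact
    delta = set(edges)   # facts newly derived in the last round
    # Index edge facts by source node to speed up the join tc(X,Z), edge(Z,Y).
    succ = {}
    for x, y in edges:
        succ.setdefault(x, set()).add(y)
    while delta:
        new = set()
        for x, z in delta:
            for y in succ.get(z, ()):
                if (x, y) not in tc:
                    new.add((x, y))
        tc |= new
        delta = new      # next round joins only with the fresh facts
    return tc

# Hypothetical toy dataset: a chain 1 -> 2 -> 3 -> 4.
edges = {(1, 2), (2, 3), (3, 4)}
print(sorted(transitive_closure(edges)))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

Real benchmark runs use relations with hundreds of thousands of tuples, where the choice of join order, indexing, and duplicate elimination strategy dominates engine performance.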