A key challenge for the Semantic Web is the ability to query large knowledge bases effectively. Because several competing systems are emerging, we need benchmarks that evaluate them objectively, and developing effective benchmarks in an emerging domain is a challenging endeavor. In this paper, we propose a requirements-driven framework for developing benchmarks for Semantic Web Knowledge Base Systems (SW KBSs), and we make two major contributions. First, we provide a list of requirements for SW KBS benchmarks, which can serve as an unbiased guide both for benchmark developers and for personnel responsible for system acquisition and benchmarking. Second, we provide an organized collection of techniques and tools for developing such benchmarks. In particular, the collection contains a detailed guide to generating benchmark workloads, defining performance metrics, and interpreting experimental results.
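
To make the workload-and-metrics discussion concrete, the sketch below times a single SPARQL query over a small RDF graph and reports load time and query response time, two metrics commonly used when benchmarking SW KBSs. This is a minimal illustration under stated assumptions, not the framework the paper describes: the file name data.rdf and the query pattern are hypothetical placeholders, and the rdflib Python library is assumed as the knowledge base system.

    # Minimal sketch (not the paper's framework): timing one SPARQL query
    # with rdflib. The data file and query are hypothetical placeholders.
    import time
    from rdflib import Graph

    g = Graph()

    t0 = time.perf_counter()
    g.parse("data.rdf")                       # hypothetical benchmark dataset
    load_time = time.perf_counter() - t0      # metric 1: load time

    query = """
    SELECT ?x WHERE { ?x a <http://example.org/ub#GraduateStudent> }
    """                                       # hypothetical benchmark query

    t0 = time.perf_counter()
    results = list(g.query(query))            # materialize so retrieval cost is included
    response_time = time.perf_counter() - t0  # metric 2: query response time

    print(f"triples={len(g)} load={load_time:.3f}s "
          f"answers={len(results)} response={response_time:.3f}s")

A fuller harness along these lines would repeat each query over warm and cold caches and report averages, since single-run timings of this kind are noisy.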