Liquid benchmarks: benchmarking-as-a-service
Proceedings of the 11th annual international ACM/IEEE joint conference on Digital libraries
Experimental evaluation and comparison of techniques, algorithms, approaches, or complete systems is crucial for assessing the practical impact of research results. The quality of published experimental results is often limited for several reasons: limited time, the unavailability of standard benchmarks, or a shortage of computing resources. Moreover, producing an independent, consistent, complete, and insightful assessment of the alternatives in a given domain is a time- and resource-consuming task, and one that must be repeated periodically to remain up to date. In this paper, we coin the notion of Liquid Benchmarks: online, public services that provide collaborative platforms for unifying the efforts of peer researchers from all over the world, simplifying the task of performing high-quality experimental evaluations and guaranteeing a transparent scientific crediting process.