The Michigan Benchmark: A Microbenchmark for XML Query Processing Systems. In Proceedings of the VLDB 2002 Workshop EEXTT and CAiSE 2002 Workshop DTWeb on Efficiency and Effectiveness of XML Tools and Techniques and Data Integration over the Web (Revised Papers).
XMach-1: A Benchmark for XML Data Management. In Datenbanksysteme in Büro, Technik und Wissenschaft (BTW), 9. GI-Fachtagung.
XBench Benchmark and Performance Testing of XML DBMSs. In Proceedings of the 20th International Conference on Data Engineering (ICDE '04).
MonetDB/XQuery: A Fast XQuery Processor Powered by a Relational Engine. In Proceedings of the 2006 ACM SIGMOD International Conference on Management of Data.
XCheck: A Platform for Benchmarking XQuery Engines. In Proceedings of the 32nd International Conference on Very Large Data Bases (VLDB '06).
XMark: A Benchmark for XML Data Management. In Proceedings of the 28th International Conference on Very Large Data Bases (VLDB '02).
Implementing XQuery 1.0: The Galax Experience. In Proceedings of the 29th International Conference on Very Large Data Bases (VLDB '03).
XPathMark: An XPath Benchmark for the XMark Generated Data. In Proceedings of the Third International Conference on Database and XML Technologies (XSym '05).
MemBeR: A Micro-benchmark Repository for XQuery. In Proceedings of the Third International Conference on Database and XML Technologies (XSym '05).
This paper presents an extensive and detailed experimental evaluation of XQuery processors. The study consists of running five publicly available XQuery benchmarks (the Michigan benchmark MBench, XBench, XMach-1, XMark, and XOO7) on six XQuery processors: three stand-alone (file-based) processors (Galax, Qizx/Open, Saxon-B) and three XML/XQuery database systems (BerkeleyDB/XML, MonetDB/XQuery, X-Hive/DB). Beyond assessing and comparing the functionality, performance, and scalability of these systems, the major focus of this work is to report in detail on the experience gained while performing such an exhaustive study, to discuss the problems we encountered and how we solved them, and thus to provide guidelines (or even a recipe) for reproducible large-scale experimental research and system evaluation.
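The abstract stresses reproducibility as much as the raw measurements. As a minimal sketch of the kind of timing harness such a study needs, the Python fragment below runs each query several times per engine and reports wall-clock statistics. The engine invocations are assumptions for illustration only: the Saxon-B command-line entry point net.sf.saxon.Query and Galax's galax-run tool are real, but the jar path saxon8.jar and the query file q01.xq are hypothetical placeholders, not the paper's actual setup.

```python
import statistics
import subprocess
import time

# Hypothetical engine invocations (jar paths, binaries on PATH, and query
# files are illustrative placeholders, not the paper's configuration).
ENGINES = {
    "saxon-b": ["java", "-cp", "saxon8.jar", "net.sf.saxon.Query"],
    "galax": ["galax-run"],
}

REPETITIONS = 5  # repeat each query to smooth out timing noise


def time_query(engine_cmd, query_file):
    """Run one query several times; return (min, median, max) seconds."""
    timings = []
    for _ in range(REPETITIONS):
        start = time.perf_counter()
        # Discard query output: only wall-clock execution time is measured.
        subprocess.run(engine_cmd + [query_file], check=True,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        timings.append(time.perf_counter() - start)
    return min(timings), statistics.median(timings), max(timings)


if __name__ == "__main__":
    for name, cmd in ENGINES.items():
        lo, med, hi = time_query(cmd, "q01.xq")
        print(f"{name}: min={lo:.3f}s  median={med:.3f}s  max={hi:.3f}s")
```

Repeating each query and reporting a spread rather than a single measurement is one of the simplest safeguards against timing noise that guidelines for reproducible experiments typically recommend.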