The OAEI Benchmark test set has served for many years as a main reference for evaluating and comparing ontology matching systems. However, it has barely varied since 2004 and has become a relatively easy task for matchers. In this paper, we present the design of a flexible test generator built on an extensible set of alterators, which can be used programmatically to generate different test sets from different seed ontologies and with different alteration modalities. It has been used to reproduce Benchmark, both with the original seed ontology and with other ontologies. This highlights the remarkable stability of results over different generations, the preservation of difficulty across seed ontologies, a systematic bias towards the initial Benchmark test set, and the inability of such tests to identify an overall winning matcher. These were exactly the properties for which Benchmark had been designed. Furthermore, the generator has been used to provide new test sets aimed at increasing the difficulty and discriminability of Benchmark. Although difficulty can easily be increased with the generator, attempts to increase discriminability proved unfruitful. These efforts nonetheless raise questions about the very nature of discriminability.
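To make the design concrete, below is a minimal sketch, in Java, of how an alterator-based generator of this kind could be organized. All names here (Ontology, Alterator, ScrambleLabels, generate) are hypothetical illustrations of the pipeline idea, not the actual API of the system described above.

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class GeneratorSketch {

    // A toy "ontology": entity names mapped to their labels.
    record Ontology(Map<String, String> labels) {
        Ontology copy() { return new Ontology(new LinkedHashMap<>(labels)); }
    }

    // One alteration step (e.g. scramble labels, suppress comments, flatten hierarchy).
    interface Alterator {
        Ontology alter(Ontology in, double ratio, Random rng);
    }

    // Example alterator: replaces a given fraction of labels with random strings.
    static class ScrambleLabels implements Alterator {
        public Ontology alter(Ontology in, double ratio, Random rng) {
            Ontology out = in.copy();
            for (String entity : out.labels().keySet()) {
                if (rng.nextDouble() < ratio) {
                    out.labels().put(entity, "x" + Long.toHexString(rng.nextLong()));
                }
            }
            return out;
        }
    }

    // Chains alterators to derive one test ontology from any seed ontology;
    // a fixed random seed makes each generation reproducible.
    static Ontology generate(Ontology seed, List<Alterator> pipeline,
                             double ratio, long randomSeed) {
        Random rng = new Random(randomSeed);
        Ontology current = seed;
        for (Alterator a : pipeline) {
            current = a.alter(current, ratio, rng);
        }
        return current;
    }

    public static void main(String[] args) {
        Ontology seed = new Ontology(new LinkedHashMap<>(
                Map.of("Person", "person", "Paper", "paper", "Author", "author")));
        Ontology test = generate(seed, List.of(new ScrambleLabels()), 0.5, 42L);
        System.out.println(test.labels());
    }
}

Under such a design, increasing difficulty amounts to raising the alteration ratio or chaining more alterators, while reproducing a test set amounts to replaying the same pipeline with the same random seed on the same seed ontology.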