Reflections on five years of evaluating semantic search systems

  • Authors:
  • Victoria Uren, Marta Sabou, Enrico Motta, Miriam Fernandez, Vanessa Lopez, Yuangui Lei

  • Affiliations:
  • Victoria Uren: Department of Computer Science, The University of Sheffield, Regent Court, 211 Portobello, Sheffield, S1 4DP, UK, and Knowledge Media Institute, The Open University, Milton Keynes, MK7 6AA, UK
  • Marta Sabou, Enrico Motta, Miriam Fernandez, Vanessa Lopez: Knowledge Media Institute, The Open University, Milton Keynes, MK7 6AA, UK
  • Yuangui Lei: Accelrys Software Inc., 334 Cambridge Science Park, CB4 0WN, UK, and Knowledge Media Institute, The Open University, Milton Keynes, MK7 6AA, UK

  • Venue:
  • International Journal of Metadata, Semantics and Ontologies
  • Year:
  • 2010

Abstract

Evaluations of semantic search systems are generally small scale and ad hoc, owing to the lack of appropriate resources such as test collections, agreed performance criteria and independent judgements of performance. By analysing our work in building and evaluating semantic tools over the last five years, we conclude that the growth of the semantic web has led to an improvement in the available resources and, consequently, in the robustness of performance assessments. We propose two directions for continuing evaluation work: the development of extensible evaluation benchmarks and the use of logging parameters for evaluating individual components of search systems.