Explanation-aware software design aims to make software systems smarter in their interactions with users. The long-term goal is to provide methods and tools for systematically engineering understandability into (knowledge-based) software systems. In this paper, we describe how we improved the understandability of a semantic search engine, RadSem. The research project MEDICO aims at developing an intelligent, robust, and scalable semantic search engine for medical documents. RadSem is based on formal ontologies and is designed for different kinds of users. Since semantic search results are often hard to understand, we integrated an explanation facility into RadSem for justifying and exploring search results; it employs the same ontologies for explanation generation that are used for searching. We evaluated the understandability of selected concept labels in an experiment with different user groups, depicting explanations as semantic networks and selecting labels with a class-frequency approach.
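The class-frequency idea for label selection can be illustrated with a minimal sketch: among candidate labels for a concept, prefer the one whose class occurs most often in the annotated corpus, on the assumption that frequent classes are more familiar to users. The data, function name, and concept labels below are illustrative assumptions, not taken from RadSem or MEDICO.

```python
from collections import Counter

# Hypothetical stream of class (concept) occurrences, e.g., gathered from
# annotated medical documents. Labels are illustrative only.
annotations = [
    "Lymph Node", "Lymphoma", "Lymph Node", "Neoplasm",
    "Lymph Node", "Lymphoma", "Neoplasm", "Neoplasm", "Neoplasm",
]

# Count how often each class occurs in the corpus.
class_frequency = Counter(annotations)

def select_label(candidate_labels, frequencies):
    """Pick the candidate label with the highest class frequency,
    assuming that more frequent classes are easier to understand."""
    return max(candidate_labels, key=lambda label: frequencies.get(label, 0))

# Choose between two candidate labels for the same ontology concept.
print(select_label(["Lymphoma", "Neoplasm"], class_frequency))  # -> Neoplasm
```

In a real system the frequencies would come from the annotation store rather than an in-memory list, but the selection rule itself stays this simple.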