Evaluating the success of a knowledge acquisition (KA) task is difficult and expensive. Most evaluation approaches rely on experts themselves, either directly, or indirectly through data previously prepared with the help of experts. In incremental KA, knowledge base (KB) errors are monitored and corrected by an expert, so a record of the knowledge-based system's (KBS) performance is usually easy to keep during its evolution. We propose to integrate into the incremental KA process an evaluation process, based on statistical analysis, that estimates the effectiveness of the KBS as it actually evolves. We tailor this analysis for Ripple Down Rules (RDR), an effective incremental KA methodology in which a record of KBS performance can be easily derived and updated as new cases are processed by the system. An RDR KB is a collection of rules with hierarchical exceptions, entered and validated by the expert in the context of their use. This greatly facilitates the knowledge maintenance task, which in RDR characteristically overlaps with the incremental KA process. The work in this paper aims to overlap evaluation with maintenance and development of the knowledge base. It also minimises the major expense in deploying an RDR KBS: keeping a domain expert on-line during maintenance and the initial period of deployment. The expert is not kept on-line longer than is absolutely necessary. We use the structure and semantics of an evolving RDR KB, combined with proven machine learning statistical methods, to estimate the added value of every KB update as the KB evolves. Using these values, decision-makers in the organisation employing the KBS can perform a cost-benefit analysis of continuing the incremental KA process. They can then determine when this process, which requires keeping an expert on-line, should be terminated.
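The RDR structure the abstract describes, rules with local, hierarchical exceptions tried in the context of the rule they refine, can be sketched as a small tree of condition/conclusion nodes, alongside a simple confidence interval on the KB's error rate computed from the running record of corrected cases. This is only an illustrative sketch: the class, function, and example domain below are invented for exposition, and the paper's actual statistical analysis is more tailored than a plain normal-approximation interval.

```python
import math

class RDRNode:
    """One rule in a (single-classification) Ripple Down Rules tree.

    `except_` is tried when the condition fires (a local exception
    refining this rule); `else_` is tried when it does not fire.
    """
    def __init__(self, condition, conclusion, except_=None, else_=None):
        self.condition = condition    # predicate over a case (a dict)
        self.conclusion = conclusion  # label if this is the last rule to fire
        self.except_ = except_
        self.else_ = else_

    def classify(self, case):
        if self.condition(case):
            if self.except_ is not None:
                refined = self.except_.classify(case)
                if refined is not None:
                    return refined  # an exception overrode this rule
            return self.conclusion
        return self.else_.classify(case) if self.else_ else None

def error_rate_interval(errors, n, z=1.96):
    """Normal-approximation 95% confidence interval for the KB error
    rate, from n processed cases of which `errors` were corrected."""
    p = errors / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# Toy domain: one rule with one exception, plus a default.
root = RDRNode(lambda c: c["flies"], "bird",
               except_=RDRNode(lambda c: c["mammal"], "bat"),
               else_=RDRNode(lambda c: True, "other"))

print(root.classify({"flies": True, "mammal": False}))  # bird
print(root.classify({"flies": True, "mammal": True}))   # bat
print(root.classify({"flies": False, "mammal": False})) # other
print(error_rate_interval(5, 100))  # interval around 0.05
```

In this reading, the on-line expert adds a new exception node each time a case is misclassified, and the narrowing interval on the error rate is the kind of evidence a decision-maker could weigh when deciding whether keeping the expert on-line is still worth the cost.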