Prudence analysis (PA) is a relatively new, practical and highly innovative approach to the problem of brittleness in knowledge-based system (KBS) development. PA is essentially an online validation approach in which each situation or case presented to the KBS for inferencing is simultaneously validated; instead of simply providing a conclusion, the system also issues a warning when validation fails. Previous studies have shown that a modification of multiple classification ripple-down rules (MCRDR), referred to as rated MCRDR (RM), achieves strong and flexible results in simulated domains with artificial data sets. This paper presents a study of the effectiveness of RM in an eHealth document monitoring and classification domain using human expertise. It also investigates what effect PA has when the KBS developer relies entirely on the warnings for maintenance. Results indicate that the system remains surprisingly robust even when warning accuracy is allowed to drop quite low. This study of a previously little-explored area provides a strong indication of PA's potential for future knowledge-based system development.
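To make the prudence idea concrete, the following is a minimal, hypothetical sketch of online validation in a toy rule-based classifier. All names here are illustrative assumptions, not the actual RM/MCRDR implementation: the toy "prudence" check simply warns when a case's attribute values fall outside the range of values the matching rule has previously been applied to.

```python
class PrudentKBS:
    """Toy knowledge-based system with a prudence-style warning.

    Hypothetical sketch only: real prudence analysis (e.g. rated MCRDR)
    uses far richer case profiles and rule structures.
    """

    def __init__(self):
        # Each rule: [condition, conclusion, profile of value ranges seen]
        self.rules = []

    def add_rule(self, condition, conclusion, seed_case):
        # The training case's values seed the rule's (min, max) profile.
        profile = {k: (v, v) for k, v in seed_case.items()}
        self.rules.append([condition, conclusion, profile])

    def classify(self, case):
        """Return (conclusion, warning). warning=True means the inference
        could not be validated and should be checked by the expert."""
        for condition, conclusion, profile in self.rules:
            if condition(case):
                # Prudence check: warn if any attribute lies outside the
                # range this rule has previously fired on.
                warn = any(
                    case[k] < lo or case[k] > hi
                    for k, (lo, hi) in profile.items() if k in case
                )
                return conclusion, warn
        return None, True  # no rule fires: always warn


kbs = PrudentKBS()
kbs.add_rule(lambda c: c["temp"] > 37.5, "fever", {"temp": 38.0})

print(kbs.classify({"temp": 38.0}))  # ('fever', False): within seen range
print(kbs.classify({"temp": 39.5}))  # ('fever', True): outside seen range
print(kbs.classify({"temp": 36.0}))  # (None, True): no rule fires
```

In a full system, a maintenance step would follow each warning: the expert confirms or corrects the conclusion, and the rule's profile (or the rule tree itself) is updated incrementally, which is the sense in which the KBS developer can rely on the warnings alone for maintenance.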