Eliciting complex logical rules directly from logic-naive subject matter experts (SMEs) is a challenging knowledge capture task. We describe a large-scale experiment evaluating tools designed to produce SME-authored rule bases. We assess the quality of the resulting rule bases with respect to: 1) performance on the addressed functional task, military course of action (COA) critiquing; and 2) intrinsic knowledge representation quality. In the course of this assessment, we note both strengths and weaknesses in the state of the art, and accordingly suggest some foci for future development in this important technology area.