Design of an intelligible mobile context-aware application
Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services
Smart environments are improving their performance and services by increasingly using ubiquitous sensing and complex inference mechanisms. However, this comes at the cost of reduced intelligibility, user trust, and control. The Intelligibility Toolkit was developed to support the automatic generation and provision of explanations that help users understand context-aware inference. We have extended the toolkit to generate explanations for a wider range of inference models and to provide two styles of explanation: rule traces and weights of evidence. We describe explanations generated from several inference models for a smart home activity recognition dataset. This demonstrates the versatility of the Intelligibility Toolkit in retaining explanatory capabilities across different inference models.
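To illustrate the weights-of-evidence style of explanation mentioned in the abstract, the sketch below computes per-feature log-odds contributions for a naive Bayes classifier. This is a minimal illustration, not the toolkit's actual API: the activity labels, sensor features, and likelihood values are hypothetical, chosen only to show how each observed feature contributes additive evidence for or against a class.

```python
import math

# Hypothetical conditional probabilities P(feature | class) for a toy
# "sleeping" vs. "not sleeping" activity classifier. These numbers are
# illustrative assumptions, not values from the paper or its dataset.
likelihoods = {
    # feature: (P(feature | sleeping), P(feature | not sleeping))
    "bedroom_motion=on": (0.9, 0.2),
    "kitchen_motion=on": (0.05, 0.6),
}

def weight_of_evidence(feature):
    """Log-odds contribution of one observed feature toward 'sleeping'.

    Positive values are evidence for the class; negative values are
    evidence against it. For naive Bayes these contributions are
    additive, which is what makes this explanation style possible.
    """
    p_given_c, p_given_not_c = likelihoods[feature]
    return math.log(p_given_c / p_given_not_c)

# Summing per-feature weights gives the feature-dependent part of the
# log-posterior odds, so the explanation decomposes the prediction.
total = sum(weight_of_evidence(f) for f in likelihoods)
```

A weights-of-evidence explanation would then present each feature's contribution (here, bedroom motion supports "sleeping" while kitchen motion opposes it), letting the user see which sensor readings drove the inference.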