This paper focuses on explanations in behavioral scenarios that involve conflicting agents. In these scenarios, implicit or explicit conflict can be caused by contradictions among agents' interests, as communicated in their explanations for why they behaved in a particular way, by a lack of knowledge of the situation, or by a mixture of multiple explanatory factors. We argue that in many cases, to assess the plausibility of explanations, we must analyze the following two components and their interrelations: (1) explanation at the actual object level (the explanation itself) and (2) explanation at the higher level (meta-explanation). A comparative analysis of the roles of both is conducted to assess the plausibility of how agents explain the scenarios of their interactions. Object-level explanation assesses the plausibility of individual claims by using a traditional approach to handling the argumentative structure of a dialog. Meta-explanation links the structure of a current scenario with that of previously learned scenarios of multi-agent interaction. The scenario structure includes agents' communicative actions and argumentation defeat relations between the subjects of these actions. We build a system in which the data for both object-level explanation and meta-explanation are specified visually, to assess the plausibility of how agent behavior in a scenario is explained. We verify that meta-explanation, in the form of machine learning of scenario structure, should be augmented by conventional explanation based on finding arguments, in the form of defeasibility analysis of individual claims, to increase the accuracy of plausibility assessment. We also define the ratio between object-level explanation and meta-explanation as the relative accuracy of plausibility assessment based on the former versus the latter source. We then observe that groups of scenarios can be clustered based on this ratio; hence, this ratio is an important parameter of human behavior associated with explaining something to other humans.
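The interplay described above can be illustrated with a minimal sketch. The class names (`CommunicativeAction`, `Scenario`), the feature encoding, and the two scoring functions below are hypothetical simplifications, not the authors' implementation: object-level plausibility is approximated as the fraction of claims that survive defeat attacks, and meta-level plausibility as the best structural (Jaccard) similarity to previously learned scenarios.

```python
from dataclasses import dataclass, field


@dataclass
class CommunicativeAction:
    agent: str    # who performs the action
    act: str      # e.g. "inform", "disagree", "threaten"
    subject: str  # the claim the action is about


@dataclass
class Scenario:
    actions: list
    # defeat relations between subjects of actions:
    # (attacker_index, attacked_index)
    defeats: set = field(default_factory=set)


def object_level_plausibility(scenario):
    """Crude stand-in for defeasibility analysis of individual claims:
    the fraction of actions whose subject is not defeated by any attack."""
    attacked = {j for (_, j) in scenario.defeats}
    undefeated = [i for i in range(len(scenario.actions)) if i not in attacked]
    return len(undefeated) / len(scenario.actions)


def meta_level_plausibility(scenario, learned_scenarios):
    """Stand-in for learning of scenario structure: similarity of this
    scenario's (action type, defeated?) features to learned scenarios."""
    def features(s):
        attacked = {j for (_, j) in s.defeats}
        return {(a.act, i in attacked) for i, a in enumerate(s.actions)}

    f = features(scenario)
    best = 0.0
    for other in learned_scenarios:
        g = features(other)
        if f | g:
            best = max(best, len(f & g) / len(f | g))  # Jaccard similarity
    return best


def plausibility_ratio(obj_score, meta_score):
    """Ratio of object-level to meta-level plausibility; scenarios could
    then be clustered by this value (meta_score > 0 assumed)."""
    return obj_score / meta_score
```

For example, a three-action complaint scenario in which one claim is defeated yields an object-level score of 2/3, while a meta-level score of 1.0 against an identical learned scenario gives a ratio of 2/3.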