Graded Reinstatement in Belief Revision
WI-IAT '11 Proceedings of the 2011 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology - Volume 02
We address the issue, in cognitive agents, of the possible loss of previous information that might later turn out to be correct when new information becomes available. To this end, we propose a framework for changing the agent's mind without erasing previous information forever, thus allowing its recovery in case the change turns out to be wrong. In this new framework, a piece of information is represented as an argument, which can be accepted to a greater or lesser degree depending on the trustworthiness of the agent proposing it. We adopt possibility theory to represent uncertainty about the information and to model the fact that information sources can be only partially trusted. The originality of the proposed framework lies in the following two points: (i) argument reinstatement is mirrored by belief reinstatement, in order to avoid the loss of previous information; (ii) new incoming information is represented in the form of arguments and is associated with a plausibility degree that depends on the trustworthiness of the information source.
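The interplay the abstract describes between trust-derived plausibility and argument reinstatement can be illustrated with a small sketch. This is not the authors' formalism: the attack graph, the min/max combination rule, and all names and numbers below are illustrative assumptions, showing only the general idea that an argument suppressed by an attacker is reinstated, at a graded level, once that attacker is itself defeated.

```python
# Illustrative sketch (not the paper's actual model): graded acceptance of
# arguments whose base plausibility comes from the trust in their source.
# On an acyclic attack graph we assume the fuzzy-labelling-style rule
#   acc(A) = min(plaus(A), 1 - max(acc(B) for B attacking A))

def acceptance(arg, plaus, attackers):
    """Recursively compute the graded acceptance degree of `arg`."""
    atk = attackers.get(arg, [])
    if not atk:
        return plaus[arg]
    strongest = max(acceptance(b, plaus, attackers) for b in atk)
    return min(plaus[arg], 1.0 - strongest)

# Toy scenario: C attacks B, and B attacks A.
plaus = {"A": 0.9, "B": 0.8, "C": 0.7}   # trust-derived plausibility degrees
attackers = {"A": ["B"], "B": ["C"]}

# Without C, B (acc 0.8) would drive A down to min(0.9, 1-0.8) = 0.2.
# With C, B is weakened: acc(B) = min(0.8, 1-0.7) = 0.3,
# so A is (partially) reinstated rather than lost:
print(acceptance("A", plaus, attackers))   # min(0.9, 1-0.3) = 0.7
```

The point of the sketch is that A's information is never erased: its acceptance degree is recomputed from the current argument graph, so defeating B automatically restores A, mirroring the belief-reinstatement behaviour the abstract highlights.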