Probabilistic reasoning in intelligent systems: networks of plausible inference
Learning the goal relevance of actions in classifier systems. In: ECAI '92, Proceedings of the 10th European Conference on Artificial Intelligence.
Relevance from an epistemic perspective. In: Artificial Intelligence, special issue on relevance.
Introduction to Multiagent Systems
Relevance sensitive belief structures. In: Annals of Mathematics and Artificial Intelligence.
Programming Multi-Agent Systems in AgentSpeak using Jason (Wiley Series in Agent Technology)
Goal generation with relevant and trusted beliefs. In: Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems, Volume 1.
Artifacts in the A&A meta-model for multi-agent systems. In: Autonomous Agents and Multi-Agent Systems.
The Benefits of Surprise in Dynamic Environments: From Theory to Practice. In: ACII '07, Proceedings of the 2nd International Conference on Affective Computing and Intelligent Interaction.
Action and planning in embedded agents. In: Robotics and Autonomous Systems.
Introducing the tileworld: experimentally evaluating agent architectures. In: AAAI '90, Proceedings of the Eighth National Conference on Artificial Intelligence, Volume 1.
Conditional learning of rules and plans by knowledge exchange in logical agents. In: RuleML 2011, Proceedings of the 5th International Conference on Rule-Based Reasoning, Programming, and Applications.
WI-IAT '11 Proceedings of the 2011 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology - Volume 02
Artificial agents engaged in real-world applications require accurate allocation strategies to make the best use of their bounded resources. In particular, during their epistemic activities they should be able to filter out irrelevant information and consider only what is relevant to the task at hand. The aim of this work is to propose a mechanism of relevance-based belief update to be implemented in a BDI cognitive agent, in order to improve agent performance in information-rich environments. The first part of the paper presents the formal, abstract model of the mechanism; the second part presents its implementation in the Jason programming platform and discusses its performance in simulation trials.
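To make the idea concrete, the following is a minimal Python sketch of a relevance-filtered belief update, not the paper's actual formalism: it assumes a hypothetical topic-overlap relevance measure and illustrative names (`Belief`, `BeliefBase`, `update`). Percepts whose topical overlap with the current task falls below a threshold are discarded instead of being added to the belief base.

```python
# Hypothetical sketch of relevance-based belief update for a BDI-style agent.
# The relevance measure (topic overlap) and all names are illustrative
# assumptions, not the mechanism defined in the paper.

from dataclasses import dataclass, field


@dataclass(frozen=True)
class Belief:
    predicate: str          # e.g. "obstacle(door)"
    topics: frozenset       # topics this belief is about


@dataclass
class BeliefBase:
    beliefs: set = field(default_factory=set)

    def update(self, percepts, task_topics, threshold=0.5):
        """Add only percepts whose topical overlap with the current
        task meets the threshold; the rest are filtered out."""
        for p in percepts:
            if not p.topics:
                continue  # no topics: cannot assess relevance, drop it
            overlap = len(p.topics & task_topics) / len(p.topics)
            if overlap >= threshold:
                self.beliefs.add(p)


# Usage: an agent pursuing a navigation task ignores weather percepts.
bb = BeliefBase()
task = frozenset({"navigation", "obstacle"})
percepts = [
    Belief("obstacle(door)", frozenset({"obstacle"})),
    Belief("weather(sunny)", frozenset({"weather"})),
]
bb.update(percepts, task)
relevant = {b.predicate for b in bb.beliefs}
```

In Jason, a comparable effect could be obtained by customizing the agent's belief base or belief-update function, so that perception is filtered against the topics of the currently active intentions.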