Representation and reasoning for goals in BDI agents
ACSC '02 Proceedings of the twenty-fifth Australasian conference on Computer science - Volume 4
Goals in agent systems: a unifying framework
Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems - Volume 2
A unified cognitive architecture for physical agents
AAAI'06 proceedings of the 21st national conference on Artificial intelligence - Volume 2
Activity scheduling for a robotic caretaker agent for the elderly
International Journal of Intelligent Information and Database Systems
Goal representation for BDI agent systems
ProMAS'04 Proceedings of the Second international conference on Programming Multi-Agent Systems
An intelligent agent situated in some environment needs to know the preferred states it is expected to achieve so that it can work towards achieving them. The preferred states the agent has selected to achieve at a given time are its "goals". One popular approach for deciding which preferred state to adopt as a goal at a given time is to assign utility values to these states and then choose the one with the highest utility. However, a preferred state can be useful to a varying degree depending upon the situation the agent is in, and hence such a static utility cannot represent its usefulness in different situations. In this paper we propose an approach to representing the utility of preferred states based on the concept of motivations, which adjusts their utility according to the situation the agent is in.
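The abstract's idea of situation-dependent utility can be sketched in code. The following is a minimal illustration, not the paper's actual formalism: all names (`Goal`, `select_goal`, the `battery` reading, the multiplier-style motivation functions) are hypothetical, and the motivation is modelled simply as a function that scales a base utility according to the current situation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical: a situation is a set of named sensor readings.
Situation = Dict[str, float]

@dataclass
class Goal:
    name: str
    base_utility: float
    # Motivation maps the current situation to a multiplier on the
    # base utility, so the same preferred state can be more or less
    # attractive in different situations.
    motivation: Callable[[Situation], float]

    def utility(self, situation: Situation) -> float:
        return self.base_utility * self.motivation(situation)

def select_goal(goals: List[Goal], situation: Situation) -> Goal:
    # Adopt the preferred state with the highest situation-adjusted utility.
    return max(goals, key=lambda g: g.utility(situation))

# Illustrative example: a caretaker robot choosing between recharging
# and serving the user. Low battery boosts the recharge motivation.
goals = [
    Goal("recharge", 5.0, lambda s: 2.0 if s["battery"] < 0.2 else 0.5),
    Goal("serve_user", 4.0, lambda s: 1.0),
]
print(select_goal(goals, {"battery": 0.1}).name)  # recharge (5.0 * 2.0 = 10.0)
print(select_goal(goals, {"battery": 0.9}).name)  # serve_user (recharge drops to 2.5)
```

With a static utility assignment the agent would always prefer `recharge` (5.0 > 4.0); letting the motivation function rescale utilities per situation is what makes the ranking context-sensitive.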