Sensitivity analysis for distributed optimization with resource constraints
Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
We study how agents can cooperate to revise their plans so as to avoid over-utilizing their local resource capacities. In principle, an agent in a multiagent environment should be prepared for every environmental event as well as every event that other agents' actions could conceivably cause. The resources required to execute such all-encompassing plans are usually overwhelming, however, so an agent must decide which tasks to perform and which to ignore in the multiagent context. Our strategy is to have agents selectively communicate relevant details of their plans so that each obtains a sufficiently accurate view of the events others might cause. Reducing uncertainty about the world trajectory improves the agents' resource-allocation decisions and decreases their resource consumption. Indeed, our experiments over a sample domain show that, on average, 50% of an agent's initial actions are planned for states it can discover it will never reach. The protocol we develop in this paper thus identifies futile actions and reclaims resources that would otherwise be wasted.
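The core idea can be illustrated with a minimal sketch (this is not the paper's actual protocol or data structures, just an assumed toy model): an agent holds a contingency plan mapping possible world states to actions, where a state is abstracted as the set of peer-caused events that have occurred. When a peer communicates which events it will actually cause, the agent recomputes the reachable states and prunes actions planned for the rest, reclaiming their resources.

```python
from itertools import combinations
from typing import Dict, Set, Tuple

# A state is modeled as the sorted tuple of peer events that occurred
# (a simplification assumed for illustration, not the paper's model).
State = Tuple[str, ...]

def reachable_states(possible_events: Set[str]) -> Set[State]:
    """All states reachable when any subset of the events may occur."""
    events = sorted(possible_events)
    return {combo
            for r in range(len(events) + 1)
            for combo in combinations(events, r)}

def prune_plan(plan: Dict[State, str],
               committed_events: Set[str]) -> Dict[State, str]:
    """Drop actions planned for states inconsistent with the events
    peers have committed to causing; their resources are reclaimed."""
    keep = reachable_states(committed_events)
    return {s: a for s, a in plan.items() if s in keep}

# The agent initially plans for every combination of two possible
# peer events...
full_plan = {s: f"handle-{'/'.join(s) or 'nothing'}"
             for s in reachable_states({"fire", "flood"})}
# ...then a peer communicates that it will only ever cause "fire",
# so half of the initially planned actions turn out to be futile.
pruned_plan = prune_plan(full_plan, {"fire"})
```

In this toy run, two of the four initially planned contingencies become unreachable once the peer's commitment is known, mirroring the kind of savings the abstract reports.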