As agents begin to perform complex tasks alongside humans as collaborative teammates, it becomes crucial that the resulting human-multiagent teams adapt to time-critical domains. In such domains, adjustable autonomy has proven useful by allowing a dynamic transfer of decision-making control between humans and agents. However, existing adjustable autonomy algorithms commonly discretize time, which not only results in high algorithm runtimes but also yields inaccurate transfer-of-control policies. In addition, existing techniques fail to address the decision-making inconsistencies often encountered in human-multiagent decision making. To address these limitations, we present a novel approach, Resolving Inconsistencies in Adjustable Autonomy in Continuous Time (RIAACT), that makes three contributions. First, we apply a continuous-time planning paradigm to adjustable autonomy, resulting in high-accuracy transfer-of-control policies. Second, our new adjustable autonomy framework both models and plans for the resolution of inconsistencies between human and agent decisions. Third, we introduce a new model, the Interruptible Action Time-dependent Markov Decision Problem (IA-TMDP), which allows actions to be interrupted at any point in continuous time. We show how to solve IA-TMDPs efficiently and leverage them to plan for the resolution of inconsistencies in RIAACT. These contributions have been realized and evaluated in a complex disaster response simulation system.
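To make the continuous-time idea concrete, the following is a minimal illustrative sketch, not the paper's IA-TMDP algorithm. It assumes a single agent action whose completion time is exponentially distributed (rate `rate`), a hard deadline, a reward `r_done` if the action completes in time, and a fallback reward `r_transfer` if control is transferred to the human. All names (`value_continue`, `best_choice`, `crossover_time`) are hypothetical. The point it demonstrates is the one the abstract makes: in continuous time, the moment at which interrupting becomes better than continuing can be found exactly, rather than being rounded to the nearest discrete time step.

```python
import math

def value_continue(t, rate, deadline, r_done):
    """Expected reward of letting the agent's action run from time t.

    With an exponential completion time, the probability of finishing
    before the deadline is 1 - exp(-rate * remaining_time).
    """
    p_done = 1.0 - math.exp(-rate * (deadline - t))
    return p_done * r_done

def best_choice(t, rate, deadline, r_done, r_transfer):
    """Interrupt (transfer control) iff its value exceeds continuing."""
    v_cont = value_continue(t, rate, deadline, r_done)
    if r_transfer > v_cont:
        return ("interrupt", r_transfer)
    return ("continue", v_cont)

def crossover_time(rate, deadline, r_done, r_transfer):
    """Exact time at which the two choices break even (requires
    0 < r_transfer < r_done).  Solving
    (1 - exp(-rate*(deadline - t))) * r_done = r_transfer for t."""
    return deadline + math.log(1.0 - r_transfer / r_done) / rate
```

Because the break-even point has a closed form here, the policy switches at the exact instant it should; a time-discretized model could only switch at the nearest grid point, which is the accuracy loss the abstract attributes to discretization.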