Asimovian multiagents: applying laws of robotics to teams of humans and agents
ProMAS'06 Proceedings of the 4th international conference on Programming multi-agent systems
The deployment of autonomous agents in real applications promises great benefits, but it also risks potentially great harm to the humans who interact with these agents. Indeed, in many applications, agent designers pursue adjustable autonomy (AA) to enable agents to harness human skills when faced with the inevitable difficulties of making autonomous decisions. Current AA research has two key shortcomings. First, current AA techniques focus on individual agent-human interactions and make assumptions that break down in settings with teams of agents. Second, humans who interact with agents want guarantees of safety, possibly beyond the scope of the agent's initial conception of optimal AA. Our approach to AA integrates Markov Decision Processes (MDPs), which are applicable in team settings, with support for explicit safety constraints on agents' behaviors. We introduce four types of safety constraints that forbid or require certain agent behaviors. The paper then presents a novel algorithm that enforces obedience to such constraints by modifying standard MDP algorithms for generating optimal policies. We prove that the resulting algorithm is correct and present results from a real-world deployment.
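To give a concrete feel for the idea, here is a minimal sketch of a "forbid"-style safety constraint layered onto standard MDP policy generation via value iteration. The toy states, actions, rewards, and the constraint below are illustrative assumptions, not the paper's actual model or algorithm: disallowed state-action pairs are simply masked out before the optimal policy is computed.

```python
# A minimal sketch (assumed toy MDP, not the paper's algorithm): a "forbid"
# safety constraint masks out state-action pairs before value iteration
# computes the optimal policy.
states = ["patrol", "alarm", "idle"]
actions = ["act_autonomously", "ask_human", "wait"]

# Transition model T[s][a] -> list of (next_state, probability).
T = {
    "patrol": {"act_autonomously": [("alarm", 0.2), ("patrol", 0.8)],
               "ask_human": [("patrol", 1.0)],
               "wait": [("idle", 1.0)]},
    "alarm":  {"act_autonomously": [("patrol", 1.0)],
               "ask_human": [("patrol", 1.0)],
               "wait": [("alarm", 1.0)]},
    "idle":   {"act_autonomously": [("patrol", 1.0)],
               "ask_human": [("idle", 1.0)],
               "wait": [("idle", 1.0)]},
}
# Immediate rewards R[s][a].
R = {
    "patrol": {"act_autonomously": 2.0, "ask_human": 1.0, "wait": 0.0},
    "alarm":  {"act_autonomously": 5.0, "ask_human": 4.0, "wait": -10.0},
    "idle":   {"act_autonomously": 1.0, "ask_human": 0.5, "wait": 0.0},
}
gamma = 0.9  # discount factor

# A "forbid" constraint: in the alarm state, the agent must not act
# without consulting a human, even though acting looks locally optimal.
forbidden = {("alarm", "act_autonomously")}

def allowed(s):
    """Actions in state s that no forbid-constraint rules out."""
    return [a for a in actions if (s, a) not in forbidden]

def value_iteration(eps=1e-6):
    """Standard value iteration, restricted to allowed actions."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in T[s][a])
                        for a in allowed(s))
                 for s in states}
        if max(abs(V_new[s] - V[s]) for s in states) < eps:
            return V_new
        V = V_new

V = value_iteration()
# Greedy policy over the constrained action sets.
policy = {s: max(allowed(s),
                 key=lambda a: R[s][a] + gamma * sum(p * V[s2] for s2, p in T[s][a]))
          for s in states}
print(policy["alarm"])  # → ask_human
```

Without the constraint, the higher immediate reward would make `act_autonomously` optimal in the `alarm` state; masking it out forces the policy to transfer control to the human there, which is the kind of guarantee the forbid constraints are meant to provide.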