In the context of supervisory control of one or more artificial agents by a human operator, defining an agent's autonomy remains a major challenge. When the mission is critical and unfolds in real time, e.g. with unmanned vehicles, errors cannot be tolerated while performance must remain as high as possible. A trade-off must therefore be found between manual control, which usually ensures good confidence in the system but imposes a high workload on the operator, and full agent autonomy, which often yields lower reliability in uncertain environments and lower performance. Keeping an operator in the decision loop does not guarantee maximal performance and safety either, as human beings are fallible. Moreover, when an agent and a human decide and act simultaneously on the same resources, conflicts are likely and coordination between the entities is mandatory. We present the basic concepts of an approach for dynamically adjusting an agent's autonomy during a mission relative to its operator, based on a formal model of the mission's ingredients.
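The trade-off described above can be sketched as a small decision rule. This is a hypothetical illustration only, not the paper's formal model: the autonomy levels, the `MissionState` fields, and the numeric thresholds are all assumptions chosen to make the workload/uncertainty trade-off concrete.

```python
from dataclasses import dataclass

# Hypothetical autonomy levels; the paper's formal model is not reproduced here.
AUTONOMY_LEVELS = ("manual", "shared", "full")

@dataclass
class MissionState:
    operator_workload: float  # 0.0 (idle) .. 1.0 (saturated) -- assumed scale
    env_uncertainty: float    # 0.0 (well known) .. 1.0 (unknown) -- assumed scale

def adjust_autonomy(state: MissionState) -> str:
    """Pick an autonomy level trading operator workload against agent reliability."""
    # High uncertainty degrades agent reliability: keep the operator in the loop,
    # unless the operator is already overloaded, in which case share control.
    if state.env_uncertainty > 0.7:
        return "manual" if state.operator_workload < 0.5 else "shared"
    # Predictable environment but a saturated operator: let the agent act alone.
    if state.operator_workload > 0.7:
        return "full"
    # Otherwise, shared control balances confidence and workload.
    return "shared"
```

For example, a lightly loaded operator facing an uncertain environment would be given manual control, while the same environment with a saturated operator would trigger shared control; the point is that the autonomy level is recomputed from the mission state rather than fixed in advance.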