One of the key issues in space exploration is deciding which tasks are best performed by humans, by robots, or by a suitable combination of the two. In general, human and robot skills are complementary. Humans provide as-yet-unmatched capabilities to perceive, think, and act when faced with anomalies and unforeseen events, but obtaining these benefits can entail substantial risk to human safety. Robots, in turn, can work in extremely hazardous environments, but their ability to perceive, think, and act autonomously is not yet error-free, although these capabilities continue to improve as new technologies emerge. Substantial past experience validates these largely qualitative notions. However, a more rigorous and systematic evaluation of human and robot roles is needed in order to optimize the design and performance of human-robot system architectures against well-defined performance metrics. This article summarizes a new analytical method for conducting such quantitative evaluations. While the article focuses on evaluating human-robot systems, the method applies to a much broader class of systems whose performance must be evaluated.