Coordination in ambiguity: coordinated active localization for multiple robots
Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems: demo papers
In environments with identical features, global localization of a robot may yield multiple hypotheses about its location. Extending this situation to multiple robots yields multiple hypotheses for each of them. Localization is facilitated if the robots are actively guided towards locations where each can use the other robots, as well as obstacles, to localize itself. This paper presents a learning technique for this process of cooperative active localization of multiple robots. An MDP framework is used to learn the task over a semi-decentralized team of robots, thereby keeping the complexity bounded, in contrast to many multi-agent learning techniques, which scale exponentially with the number of robots.
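As a rough illustration of the kind of belief-state MDP the abstract describes, the sketch below runs tabular Q-learning on a toy corridor with two identical "door" features and one unique landmark, so the robot starts with two location hypotheses and must travel to a spot where its observation disambiguates them. The corridor layout, reward of -1 per move (minimum travel), and all hyperparameters are hypothetical choices for this example, not taken from the paper, and the single-robot setting only hints at the multi-robot case.

```python
import random

# Hypothetical corridor: cells 0..7. Cells 1 and 5 carry identical "door"
# features; cell 7 holds a unique landmark.
LANDMARK, DOOR, BLANK = "landmark", "door", "blank"
ACTIONS = (-1, +1)            # move left / move right
START = frozenset({1, 5})     # two indistinguishable initial hypotheses

def obs(cell):
    if cell == 7:
        return LANDMARK
    if cell in (1, 5):
        return DOOR
    return BLANK

def step(belief, action):
    """Shift every location hypothesis by the action, clamped to the corridor."""
    return frozenset(min(7, max(0, c + action)) for c in belief)

def localized(belief):
    """The true observation disambiguates the pose once every remaining
    hypothesis predicts a distinct observation."""
    return len({obs(c) for c in belief}) == len(belief)

def q_learn(episodes=2000, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    """Tabular Q-learning over belief states (frozensets of hypotheses)."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        b = START
        for _ in range(20):  # episode cap
            if localized(b):
                break
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q.get((b, x), 0.0))
            nb = step(b, a)
            # Reward -1 per move, so the optimal policy minimizes travel.
            target = -1.0 + (0.0 if localized(nb) else
                             gamma * max(Q.get((nb, x), 0.0) for x in ACTIONS))
            Q[(b, a)] = Q.get((b, a), 0.0) + alpha * (target - Q.get((b, a), 0.0))
            b = nb
    return Q

def greedy_rollout(Q):
    """Follow the learned greedy policy from the ambiguous start state."""
    b, path = START, []
    while not localized(b) and len(path) < 20:
        a = max(ACTIONS, key=lambda x: Q.get((b, x), 0.0))
        path.append(a)
        b = step(b, a)
    return path
```

From {1, 5}, moving right twice reaches the belief {3, 7}, where one hypothesis predicts the unique landmark and the other does not, so the robot localizes in two moves; moving left would take four moves before the clamped hypotheses predict different observations. The learned greedy policy therefore heads right, matching the paper's goal of active guidance toward disambiguating locations.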