In this paper, an approach is presented to automatically allocate a set of exploration tasks among a fleet of mobile robots. The approach combines a Road-Map technique with Markov Decision Processes (MDPs). The addressed problem consists of exploring an area in which a set of points of interest characterizes the main positions the robots must visit. This problem induces long-horizon motion planning subject to combinatorial explosion. The Road-Map allows the robots to represent their spatial knowledge as a graph of way-points connected by paths; it can be modified during the exploration mission, which requires the robots to perform on-line computations. By decomposing the Road-Map into regions, an MDP allows the current group leader to evaluate each robot's interest in every region. Using these values, the leader assigns the exploration tasks to the robots.
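The abstract's core idea can be illustrated with a minimal sketch. All names below (`region_values`, `assign_regions`, the toy region graph, rewards, and distances) are hypothetical illustrations, not the paper's actual algorithm or data: an MDP-style value iteration scores each region of the decomposed Road-Map, and the group leader then greedily assigns each robot to a high-value reachable region.

```python
# Hypothetical sketch of the region-valuation + leader-assignment idea.
# Region rewards stand in for counts of unexplored points of interest.

def region_values(adjacency, rewards, gamma=0.9, iters=50):
    """Value iteration over the region graph: a region's value blends
    its own reward with the best discounted neighbour value."""
    values = {r: 0.0 for r in adjacency}
    for _ in range(iters):
        values = {
            r: rewards[r]
            + gamma * max((values[n] for n in adjacency[r]), default=0.0)
            for r in adjacency
        }
    return values


def assign_regions(robot_positions, adjacency, rewards, distances):
    """Leader-side greedy assignment: each robot receives the still-free
    region with the highest distance-discounted value."""
    values = region_values(adjacency, rewards)
    assignment, taken = {}, set()
    for robot, pos in robot_positions.items():
        best = max(
            (r for r in adjacency if r not in taken),
            key=lambda r: values[r] - distances[pos][r],
        )
        assignment[robot] = best
        taken.add(best)
    return assignment


# Toy Road-Map decomposed into three regions A, B, C (illustrative only).
adjacency = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
rewards = {"A": 1.0, "B": 0.0, "C": 5.0}  # unexplored points of interest
distances = {
    "A": {"A": 0, "B": 1, "C": 2},
    "C": {"A": 2, "B": 1, "C": 0},
}
print(assign_regions({"r1": "A", "r2": "C"}, adjacency, rewards, distances))
```

Note the design choice this mirrors: the leader does not plan joint trajectories for the whole fleet (combinatorially explosive), but only compares per-region values per robot, which stays cheap enough for on-line recomputation when the Road-Map changes.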