Introduction to Reinforcement Learning
Distributed Control for 3D Metamorphosis
Autonomous Robots
Self-reconfiguring robots: designs, algorithms, and applications
Heterogeneous self-reconfiguring robotics
Multimode locomotion via SuperBot reconfigurable robots
Autonomous Robots
Experiments with a ZigBee wireless communication system for self-reconfiguring modular robots
ICRA'09 Proceedings of the 2009 IEEE international conference on Robotics and Automation
Design of prismatic cube modules for convex corner traversal in 3D
IROS'09 Proceedings of the 2009 IEEE/RSJ international conference on Intelligent robots and systems
Roombots: reconfigurable robots for adaptive furniture
IEEE Computational Intelligence Magazine
Neural control of a modular multi-legged walking machine: Simulation and hardware
Robotics and Autonomous Systems
Self-reconfiguring robots have the potential to explore highly variable terrain, operating as parallel groups or combining to surmount large obstacles. At smaller scales, the modules may also be able to physically render arbitrary shapes in an interactive way. Realizing these capabilities requires groups with large numbers of modules, and the algorithms controlling such groups must be extremely scalable so they can run on simple modules. In this work, we present an algorithm for locomotion of lattice-based self-reconfiguring robots that uses constant memory per module, with execution times that are sublinear in the number of modules. The algorithm is inspired by reinforcement learning and uses dynamic programming to plan module paths in parallel. We have also developed a novel localized cooperation scheme that allows the modules to move without disconnecting the system while requiring only small amounts of communication. The combined algorithm can direct locomotion over arbitrary obstacles, and because it replans continuously, the goal can be moved at any time to "joystick" the robot over the environment. The formulation of the goal used in planning also encourages dynamic stability. We have developed both centralized and decentralized implementations in simulation, as well as an implementation for the SuperBot system, and present empirical results showing the sublinear scaling of our technique.
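To illustrate the dynamic-programming idea behind planners of this kind, the sketch below runs value iteration over a 2D lattice with obstacles: each free cell receives a cost-to-goal, and a module then moves by greedy descent on that cost field. This is a minimal assumed example, not the paper's algorithm — the grid encoding, unit step costs, 4-connectivity, and the `value_iteration`/`next_step` names are all illustrative choices, and the paper's distributed, connectivity-preserving machinery is omitted.

```python
def value_iteration(grid, goal, max_iters=100):
    """Compute a cost-to-goal field over the free cells of a lattice.

    grid: 2D list, 0 = free cell, 1 = obstacle.
    goal: (row, col) of the goal cell.
    """
    rows, cols = len(grid), len(grid[0])
    INF = float('inf')
    cost = [[INF] * cols for _ in range(rows)]
    cost[goal[0]][goal[1]] = 0.0
    for _ in range(max_iters):
        changed = False
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] == 1 or (r, c) == goal:
                    continue
                # Bellman update: best neighbor cost plus one unit step.
                best = INF
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        best = min(best, 1.0 + cost[nr][nc])
                if best < cost[r][c]:
                    cost[r][c] = best
                    changed = True
        if not changed:  # converged: no cell improved this sweep
            break
    return cost

def next_step(cost, pos):
    """A module's local rule: step to the neighbor with strictly lower cost."""
    r, c = pos
    best, move = cost[r][c], pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(cost) and 0 <= nc < len(cost[0]) and cost[nr][nc] < best:
            best, move = cost[nr][nc], (nr, nc)
    return move
```

Because the cost field can be recomputed whenever the goal cell changes, this structure also hints at how continuous replanning supports "joysticking" the goal at runtime.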