Learning in the mobile robot domain is a very challenging task, especially under nonstationary conditions. The behavior-based approach has proven useful for making mobile robots work in real-world situations. Since behaviors are responsible for managing the interactions between the robot and its environment, observing their use can be exploited to model these interactions. In our approach, the robot is initially given a set of “behavior-producing” modules to choose from, and the algorithm provides a memory-based mechanism that dynamically adapts the selection of these behaviors according to the history of their use. The approach is validated on a vision- and sonar-based Pioneer I robot under nonstationary conditions, in the context of a multirobot foraging task. Results show the effectiveness of the approach in taking advantage of any regularities experienced in the world, leading to fast and adaptable specialization of the learning robot.
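The core idea the abstract describes — keeping per-behavior utility estimates that are updated from the history of use, with recency weighting so the robot can track a changing world — can be sketched as follows. This is a minimal illustration under assumed design choices (an exponentially weighted utility average and epsilon-greedy selection), not the paper's exact algorithm; the class and parameter names are hypothetical.

```python
import random

class MemoryBasedSelector:
    """Hypothetical sketch of memory-based behavior selection:
    keep a running utility estimate per behavior-producing module
    and adapt the choice as outcomes accumulate."""

    def __init__(self, behaviors, decay=0.9, epsilon=0.1):
        # History-based utility estimate for each behavior module.
        self.utilities = {b: 0.0 for b in behaviors}
        self.decay = decay      # recency weighting, for nonstationary worlds
        self.epsilon = epsilon  # occasional exploration of other behaviors

    def select(self):
        # Mostly exploit the behavior with the best estimated utility,
        # but sometimes explore so estimates stay current.
        if random.random() < self.epsilon:
            return random.choice(list(self.utilities))
        return max(self.utilities, key=self.utilities.get)

    def update(self, behavior, outcome):
        # Exponentially weighted average: recent experience dominates,
        # so the selector can re-specialize when the environment changes.
        u = self.utilities[behavior]
        self.utilities[behavior] = self.decay * u + (1 - self.decay) * outcome
```

The recency weighting is the key design point for the nonstationary setting: old experience decays away, so a behavior that stops paying off (e.g., a foraging area depleted by other robots) loses its estimated utility and the robot specializes elsewhere.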