Proceedings of the Seventh International Conference on Machine Learning (1990)
Made-up minds: a constructivist approach to artificial intelligence
A possibility for implementing curiosity and boredom in model-building neural controllers
From Animals to Animats: Proceedings of the First International Conference on Simulation of Adaptive Behavior
Improving Generalization with Active Learning
Machine Learning - Special issue on structured connectionist systems
Selective Sampling Using the Query by Committee Algorithm
Machine Learning
An optimal algorithm for approximate nearest neighbor searching in fixed dimensions
Journal of the ACM (JACM)
Exploration in active learning
The handbook of brain theory and neural networks
Between MDPs and semi-MDPs: a framework for temporal abstraction in reinforcement learning
Artificial Intelligence
Introduction to Reinforcement Learning
Efficient Global Optimization of Expensive Black-Box Functions
Journal of Global Optimization
Dopamine: generalization and bonuses
Neural Networks - Computational models of neuromodulation
Toward Optimal Active Learning through Sampling Estimation of Error Reduction
ICML '01 Proceedings of the Eighteenth International Conference on Machine Learning
Less is More: Active Learning with Support Vector Machines
ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning
breve: a 3D environment for the simulation of decentralized systems and artificial life
ICAL 2003 Proceedings of the eighth international conference on Artificial life
Apprenticeship learning via inverse reinforcement learning
ICML '04 Proceedings of the twenty-first international conference on Machine learning
Completely Derandomized Self-Adaptation in Evolution Strategies
Evolutionary Computation
Incremental Online Learning in High Dimensions
Neural Computation
An intrinsic reward mechanism for efficient exploration
ICML '06 Proceedings of the 23rd international conference on Machine learning
Probabilistic inference for solving discrete and continuous state Markov Decision Processes
ICML '06 Proceedings of the 23rd international conference on Machine learning
Nonmyopic active learning of Gaussian processes: an exploration-exploitation approach
Proceedings of the 24th international conference on Machine learning
Active learning for logistic regression: an evaluation
Machine Learning
A self-organizing neural model of motor equivalent reaching and tool use by a multijoint arm
Journal of Cognitive Neuroscience
Neurocomputing
The Journal of Machine Learning Research
Adaptive Behavior - Animals, Animats, Software Agents, Robots, Adaptive Systems
Reinforcement learning for robot soccer
Autonomous Robots
Active Learning for Reward Estimation in Inverse Reinforcement Learning
ECML PKDD '09 Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases: Part II
Interactive policy learning through confidence-based autonomy
Journal of Artificial Intelligence Research
Active learning with statistical models
Journal of Artificial Intelligence Research
A Computational Model of Social-Learning Mechanisms
Adaptive Behavior - Animals, Animats, Software Agents, Robots, Adaptive Systems
Autonomously learning an action hierarchy using a learned qualitative state representation
IJCAI'09 Proceedings of the 21st International Joint Conference on Artificial Intelligence
Evolution and learning in an intrinsically motivated reinforcement learning robot
ECAL'07 Proceedings of the 9th European conference on Advances in artificial life
On-line regression algorithms for learning mechanical models of robots: A survey
Robotics and Autonomous Systems
Formal Theory of Creativity, Fun, and Intrinsic Motivation (1990–2010)
IEEE Transactions on Autonomous Mental Development
Goal Babbling Permits Direct Learning of Inverse Kinematics
IEEE Transactions on Autonomous Mental Development
R-IAC: Robust Intrinsically Motivated Exploration and Active Learning
IEEE Transactions on Autonomous Mental Development
Cognitive Developmental Robotics: A Survey
IEEE Transactions on Autonomous Mental Development
Intrinsically Motivated Reinforcement Learning: An Evolutionary Perspective
IEEE Transactions on Autonomous Mental Development
Intrinsic Motivation Systems for Autonomous Mental Development
IEEE Transactions on Evolutionary Computation
A Developmental Roadmap for Learning by Imitation in Robots
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Active graph matching based on pairwise probabilities between nodes
SSPR'12/SPR'12 Proceedings of the 2012 Joint IAPR international conference on Structural, Syntactic, and Statistical Pattern Recognition
Novelty and interestingness measures for design-space exploration
Proceedings of the 15th annual conference on Genetic and evolutionary computation
Design for a darwinian brain: part 2. cognitive architecture
Living Machines'13 Proceedings of the Second international conference on Biomimetic and Biohybrid Systems
Socially guided intrinsic motivation for robot learning of motor skills
Autonomous Robots
We introduce the Self-Adaptive Goal Generation Robust Intelligent Adaptive Curiosity (SAGG-RIAC) architecture, an intrinsically motivated goal exploration mechanism that enables active learning of inverse models in high-dimensional redundant robots. It allows a robot to efficiently and actively learn distributions of parameterized motor skills/policies that solve a corresponding distribution of parameterized tasks/goals. The architecture makes the robot actively sample novel parameterized tasks in the task space, based on a measure of competence progress; each sampled task triggers low-level goal-directed learning of the motor policy parameters needed to solve it. For both learning and generalization, the system leverages regression techniques that infer the motor policy parameters for a given novel parameterized task from the previously learned correspondences between policy and task parameters. We present experiments with high-dimensional continuous sensorimotor spaces in three different robotic setups: (1) learning the inverse kinematics of a highly redundant robotic arm, (2) learning omnidirectional locomotion with motor primitives in a quadruped robot, and (3) an arm learning to control a fishing rod with a flexible wire. We show that (1) exploration in the task space can be much faster than exploration in the actuator space for learning inverse models in redundant robots; (2) selecting goals that maximize competence progress creates developmental trajectories that drive the robot to progressively focus on tasks of increasing complexity, and is statistically significantly more efficient than selecting tasks randomly, as well as more efficient than several standard active motor babbling methods; and (3) the architecture allows the robot to actively discover which parts of its task space it can learn to reach and which it cannot.
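The core selection loop described in the abstract can be illustrated with a toy sketch: the task space is split into regions, each region tracks a history of competence values (e.g. negative reaching error), and goals are preferentially sampled from regions whose competence has recently changed the most. This is a simplified, hypothetical illustration of competence-progress-based goal sampling, not the actual SAGG-RIAC implementation (which, among other things, splits regions adaptively); all class and method names here are invented for the example.

```python
import random

class CompetenceProgressSampler:
    """Toy competence-progress goal sampler over fixed task-space regions.

    Each region keeps a history of competence values. Progress is the
    absolute difference between the mean competence of the most recent
    window and the window before it. Regions are sampled proportionally
    to progress, mixed epsilon-greedily with uniform random sampling.
    """

    def __init__(self, n_regions, window=5, epsilon=0.2, seed=0):
        self.n_regions = n_regions
        self.window = window
        self.epsilon = epsilon
        self.history = [[] for _ in range(n_regions)]
        self.rng = random.Random(seed)

    def record(self, region, competence):
        """Store the competence achieved on a goal drawn from `region`."""
        self.history[region].append(competence)

    def progress(self, region):
        """Absolute change in mean competence between two recent windows."""
        h = self.history[region]
        if len(h) < 2 * self.window:
            return 0.0
        recent = sum(h[-self.window:]) / self.window
        older = sum(h[-2 * self.window:-self.window]) / self.window
        return abs(recent - older)

    def sample_region(self):
        """Pick a region: uniform with prob. epsilon, else by progress."""
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(self.n_regions)
        scores = [self.progress(r) for r in range(self.n_regions)]
        total = sum(scores)
        if total == 0.0:  # no measurable progress anywhere yet
            return self.rng.randrange(self.n_regions)
        # Roulette-wheel selection proportional to progress.
        x = self.rng.random() * total
        acc = 0.0
        for r, s in enumerate(scores):
            acc += s
            if x <= acc:
                return r
        return self.n_regions - 1
```

A region with flat competence (already mastered, or unreachable) yields zero progress and is sampled rarely, while a region where competence is improving attracts most goals; this is the mechanism behind the developmental trajectories of increasing task complexity described above.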