Many different methods for combining human expertise with machine learning in general, and evolutionary computation in particular, are possible. Which of these methods works best, and do they outperform human design and machine design alone? To answer these questions, a human-subject experiment comparing human-assisted machine learning methods was conducted. Three approaches (advice, shaping, and demonstration) were used to assist a powerful machine learning technique (neuroevolution) on a collection of agent-training tasks, and were contrasted with both a completely manual approach (scripting) and a completely hands-off one (neuroevolution alone). The results show that (1) human-assisted evolution outperforms a manual scripting approach, (2) unassisted evolution performs consistently well across domains, and (3) different methods of assisting neuroevolution outperform unassisted evolution on different tasks. If done right, human-assisted neuroevolution can therefore be a powerful technique for constructing intelligent agents.
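To make the "neuroevolution alone" baseline concrete, the following is a minimal illustrative sketch, not the method used in the experiment: it evolves only the weights of a small fixed-topology network (unlike NEAT-style approaches, which also evolve topology) on a toy XOR task. All names, the network layout, and the hyperparameters here are illustrative assumptions.

```python
import math
import random

random.seed(0)

IN, HID = 2, 4
# Genome layout (flat list): input->hidden weights, hidden biases,
# hidden->output weights, output bias.
N = IN * HID + HID + HID + 1

def forward(w, x):
    """Feed inputs through a tiny one-hidden-layer tanh network."""
    ih = w[:IN * HID]
    bh = w[IN * HID:IN * HID + HID]
    ho = w[IN * HID + HID:IN * HID + 2 * HID]
    bo = w[-1]
    h = [math.tanh(sum(ih[j * IN + i] * x[i] for i in range(IN)) + bh[j])
         for j in range(HID)]
    return math.tanh(sum(ho[j] * h[j] for j in range(HID)) + bo)

# Toy task: XOR with targets in {-1, +1}.
XOR = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), -1)]

def fitness(w):
    """Negative squared error over the task; higher is better."""
    return -sum((forward(w, x) - t) ** 2 for x, t in XOR)

def evolve(pop_size=50, generations=200, sigma=0.5):
    """Mutation-only neuroevolution with truncation selection."""
    pop = [[random.gauss(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 5]  # top 20% survive unchanged
        pop = elite + [
            [g + random.gauss(0, sigma) for g in random.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=fitness)

best = evolve()
```

The human-assisted variants in the study would bias this search rather than replace it: advice constrains or seeds candidate networks, shaping alters the fitness signal over time, and demonstration supplies example behavior to imitate.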