An important goal for the generative and developmental systems (GDS) community is to show that GDS approaches can compete with more mainstream approaches in machine learning (ML). One popular ML domain is RoboCup and its subtasks (e.g., Keepaway). This paper shows that a GDS approach called HyperNEAT is competitive with the best Keepaway results to date. Furthermore, transfer learning is shown to be a significant advantage of GDS: for example, learning to play Keepaway should contribute to learning the full game of soccer. Previous approaches to transfer have focused on transforming the original representation to fit the new task. In contrast, this paper explores transfer with a representation designed to remain the same across different tasks. A bird's eye view (BEV) representation is introduced that can express different tasks on the same two-dimensional map. The challenge is that a raw two-dimensional map is high-dimensional and unstructured; indirect encoding addresses this naturally by compressing the representation in HyperNEAT through its geometry. As a result, the BEV learns a Keepaway policy that transfers from two different training domains without further learning or manipulation, demonstrating the power of GDS relative to other ML methods.
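To make the BEV idea concrete, the following is a minimal sketch of how a task state might be rasterized onto a fixed-size two-dimensional map. The function name, field dimensions, grid resolution, and channel layout are all illustrative assumptions, not the paper's actual implementation; the point is only that states from tasks with different numbers of players share one map shape.

```python
import numpy as np

def bird_eye_view(teammates, opponents, ball, field=(20.0, 20.0), resolution=10):
    """Rasterize a soccer-like state onto a fixed-size 2-D map.

    Positions are (x, y) tuples in field coordinates. Each entity type
    is written to its own channel, so the same map shape can describe
    tasks with different numbers of players (hypothetical layout).
    """
    grid = np.zeros((3, resolution, resolution))
    channels = {0: teammates, 1: opponents, 2: [ball]}
    for ch, entities in channels.items():
        for x, y in entities:
            # Scale field coordinates to a grid cell, clamping to the edge.
            i = min(int(x / field[0] * resolution), resolution - 1)
            j = min(int(y / field[1] * resolution), resolution - 1)
            grid[ch, i, j] = 1.0
    return grid

# A 3-vs-2 Keepaway state mapped onto the same map shape that a
# full-soccer state would use; only the occupied cells change.
state = bird_eye_view(
    teammates=[(2.0, 2.0), (18.0, 2.0), (10.0, 18.0)],
    opponents=[(9.0, 9.0), (11.0, 11.0)],
    ball=(2.0, 2.0),
)
```

Because the map's dimensionality is fixed by the grid resolution rather than the number of players, an indirect encoding such as HyperNEAT can exploit the map's geometry instead of relearning a new input layout per task.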