A massively parallel architecture for a self-organizing neural pattern recognition machine
Computer Vision, Graphics, and Image Processing
Self-Organizing Cognitive Agents and Reinforcement Learning in Multi-Agent Environment
IAT '05 Proceedings of the IEEE/WIC/ACM International Conference on Intelligent Agent Technology
Self-organizing neural architecture for reinforcement learning
ISNN'06 Proceedings of the Third international conference on Advances in Neural Networks - Volume Part I
Self-Organizing Neural Architectures and Cooperative Learning in a Multiagent Environment
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
IEEE Transactions on Neural Networks
TD-FALCON (Temporal Difference - Fusion Architecture for Learning, COgnition, and Navigation) is a class of self-organizing neural networks that incorporates Temporal Difference (TD) methods for real-time reinforcement learning. In this paper, we present two strategies, namely policy sharing and a neighboring-agent mechanism, to further improve the learning efficiency of TD-FALCON in complex multi-agent domains. Through experiments on a traffic control domain and a herding task, we demonstrate that these strategies enable TD-FALCON to remain functional and adaptable in complex multi-agent settings.
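The abstract does not give the update rule itself, but the TD methods it refers to can be illustrated with a minimal sketch. The snippet below is a hypothetical tabular Q-learning (TD) update, not TD-FALCON's actual neural architecture, and the idea of policy sharing is approximated here by letting multiple agents read and write one shared value table; all names (`td_update`, `shared_Q`, the toy states and actions) are illustrative assumptions.

```python
def td_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning (temporal difference) update.

    Q is a dict mapping (state, action) -> value; unseen pairs default to 0.
    This is a generic TD sketch, not the TD-FALCON network itself.
    """
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    td_target = r + gamma * best_next
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (td_target - Q.get((s, a), 0.0))

# Policy sharing, crudely approximated: every agent updates and acts from
# the SAME table, so one agent's experience immediately benefits the others.
shared_Q = {}
actions = ["left", "right", "stay"]

# Two hypothetical agents each contribute one experience tuple.
td_update(shared_Q, "s0", "left", 1.0, "s1", actions)   # agent 1
td_update(shared_Q, "s1", "stay", 0.0, "s0", actions)   # agent 2
```

Note how the second update already bootstraps from the value the first agent wrote for state `s0`; with per-agent tables that transfer would not happen until each agent gathered the experience itself.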