Current function approximators, especially neural networks, are often limited in several directions: most architectures can hardly be extended with more "informational" capacity; neural networks with high capacity are often too costly in computation time (especially for an implementation on the microcontroller of a real-world robot); and functions with high gradients can hardly be learned. The following approach proposes a hierarchical vector quantizing algorithm. With this algorithm, the computation time of a classification can decrease to O(log(n)), where n is the number of implemented prototypes. If a given number of prototypes cannot carry the "information" of the function to be approximated, the "informational" capacity can be increased by adding prototypes. The algorithm proposed in this article is tested in a reinforcement learning task.
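To illustrate the two properties claimed above, O(log(n)) classification and capacity growth by adding prototypes, here is a minimal sketch of a hierarchical vector-quantization tree. All names (`Node`, `classify`, `split`) and the binary-split scheme are illustrative assumptions, not the article's actual implementation:

```python
import numpy as np

class Node:
    """Node in a hypothetical hierarchical vector-quantization tree.

    Internal nodes hold two child prototypes; a lookup descends toward the
    nearer child prototype at each level, so classification visits only
    O(log n) nodes for n leaf prototypes (assuming a balanced tree).
    """
    def __init__(self, prototype, value=0.0):
        self.prototype = np.asarray(prototype, dtype=float)
        self.value = value   # function value approximated at this prototype
        self.left = None     # child nodes; both None for a leaf
        self.right = None

    def classify(self, x):
        """Descend to a leaf, choosing the nearer child prototype at each level."""
        node = self
        while node.left is not None:
            d_left = np.linalg.norm(x - node.left.prototype)
            d_right = np.linalg.norm(x - node.right.prototype)
            node = node.left if d_left <= d_right else node.right
        return node

    def split(self, offset):
        """Increase capacity: replace this leaf by two perturbed copies."""
        offset = np.asarray(offset, dtype=float)
        self.left = Node(self.prototype - offset, self.value)
        self.right = Node(self.prototype + offset, self.value)

# usage sketch: one split yields prototypes at (-1, 0) and (1, 0)
root = Node([0.0, 0.0])
root.split([1.0, 0.0])
leaf = root.classify(np.array([0.9, 0.1]))   # lands at the (1, 0) leaf
```

Splitting only the leaves whose region still approximates the target function poorly would concentrate prototypes where the function has high gradients, which is the motivation the abstract gives for a growable architecture.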