We review two forms of immediate-reward reinforcement learning: in the first, the learner is a stochastic node, while in the second, the individual unit is deterministic but has stochastic synapses. We illustrate the first method on the problem of Independent Component Analysis. Four learning rules have been developed from the second perspective, and we investigate their use for linear projection techniques such as principal component analysis, exploratory projection pursuit and canonical correlation analysis. The method is very general, requiring only a reward function specific to the task the unit is to perform. We also discuss how the method can be used to learn kernel mappings, and conclude by illustrating its use on a topology-preserving mapping.
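To make the first form concrete, the sketch below shows a single stochastic node trained by immediate-reward (REINFORCE-style) learning to perform principal component analysis: the node's output is its weighted input plus exploration noise, the reward is the squared projection, and the likelihood-ratio update with a running baseline pushes the weight vector toward the direction of maximal variance. The dataset, hyperparameters and renormalisation step are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-D zero-mean Gaussian whose leading principal component
# lies along (1, 1)/sqrt(2). (Illustrative assumption, not from the paper.)
n, d = 2000, 2
basis = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
X = (rng.normal(size=(n, d)) * np.array([3.0, 0.5])) @ basis

w = rng.normal(scale=0.1, size=d)   # weights of the single stochastic node
sigma = 0.5                         # exploration noise on the node's output
eta = 0.001                         # learning rate (assumed)
baseline = 0.0                      # running average of the reward

for x in X:
    mu = w @ x                       # deterministic activation
    y = mu + sigma * rng.normal()    # stochastic output (exploration)
    r = y ** 2                       # immediate reward: squared projection
    # REINFORCE / likelihood-ratio update with a reward baseline
    w += eta * (r - baseline) * (y - mu) / sigma ** 2 * x
    w /= np.linalg.norm(w)           # renormalise to keep |w| = 1
    baseline += 0.1 * (r - baseline)

# w should now lie close to the leading principal direction
alignment = abs(w @ basis[0])
print(alignment)
```

In expectation over the exploration noise, the update reduces to a Hebbian rule proportional to (w·x)x, so with renormalisation the weight vector converges toward the first principal component; a different reward function (e.g. one based on higher-order moments) would steer the same unit toward exploratory projection pursuit instead.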