On-line learning in changing environments with applications in supervised and unsupervised learning
Neural Networks - Computational models of neuromodulation
Fast curvature matrix-vector products for second-order gradient descent
Neural Computation
Neural network modeling for near wall turbulent flow
Journal of Computational Physics
Learning Bidirectional Similarity for Collaborative Filtering
ECML PKDD '08 Proceedings of the 2008 European Conference on Machine Learning and Knowledge Discovery in Databases - Part I
Monte Carlo inference and maximization for phrase-based translation
CoNLL '09 Proceedings of the Thirteenth Conference on Computational Natural Language Learning
A unified approach to minimum risk training and decoding
WMT '10 Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR
Monte Carlo techniques for phrase-based translation
Machine Translation
Monte-Carlo simulation balancing in practice
CG'10 Proceedings of the 7th international conference on Computers and games
Learning bidirectional asymmetric similarity for collaborative filtering via matrix factorization
Data Mining and Knowledge Discovery
A survey of techniques for incremental learning of HMM parameters
Information Sciences: an International Journal
Gain adaptation algorithms for neural networks typically adjust learning rates by monitoring the correlation between successive gradients. Here we discuss the limitations of this approach, and develop an alternative by extending Sutton's work on linear systems to the general, nonlinear case. The resulting online algorithms are computationally little more expensive than other acceleration techniques, do not assume statistical independence between successive training patterns, and do not require an arbitrary smoothing parameter. In our benchmark experiments, they consistently outperform other acceleration methods, and show remarkable robustness when faced with non-i.i.d. sampling of the input space.
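To illustrate the kind of gradient-correlation-based gain adaptation the abstract refers to, here is a minimal sketch assuming a simple delta-bar-delta-style rule on a toy linear regression problem; it is an illustration of the general idea, not the paper's own algorithm, and all names and constants in it are hypothetical.

```python
import numpy as np

# Sketch of gradient-correlation gain adaptation: each weight keeps its own
# learning rate, which grows when successive gradients agree in sign and
# shrinks when they disagree. This is the baseline idea the abstract critiques,
# not the authors' proposed method.

rng = np.random.default_rng(0)
w_true = np.array([2.0, -3.0, 0.5])      # hypothetical target weights
w = np.zeros(3)                           # model weights
gains = np.full(3, 0.01)                  # per-weight learning rates
prev_grad = np.zeros(3)
meta_rate = 0.05                          # multiplicative gain update factor

for t in range(2000):
    x = rng.normal(size=3)                # one training pattern (online setting)
    y = w_true @ x + 0.1 * rng.normal()
    grad = (w @ x - y) * x                # gradient of squared error on this pattern
    # Correlation of successive gradients drives the per-weight gain update.
    gains *= np.where(grad * prev_grad > 0, 1 + meta_rate, 1 - meta_rate)
    w -= gains * grad
    prev_grad = grad

print("learned weights:", np.round(w, 2))
```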